This document contains only library issues which have been closed
by the Library Working Group (LWG) after being found to be defects
in the standard. That is, issues which have a status of DR,
TC1, C++11,
or Resolved. See the
Library Closed Issues List for issues closed as non-defects. See the
Library Active Issues List for active issues and more information. The
introductory material in that document also applies to this
document.

R24:
Post-Santa Cruz mailing: reflects decisions made at the Santa Cruz
meeting. All Ready issues from R23 with the exception of 253,
which has been given a new proposed resolution, were
moved to DR status. Added new issues 383-389.
(Issues 387-389 were discussed
at the meeting.) Made progress on issues 225, 226,
229: 225 and 229 have been moved to
Ready status, and the only remaining concerns with 226 involve wording.

R23:
Pre-Santa Cruz mailing. Added new issues 367-382.
Moved issues in the TC to TC status.

R20:
Post-Redmond mailing; reflects actions taken in Redmond. Added
new issues 336-350, of which issues
347-350 were added since Redmond, hence
not discussed at the meeting.
All Ready issues were moved to DR status, with the exception of issues
284, 241, and 267.
Noteworthy issues discussed at Redmond include
120, 202, 226, 233,
270, 253, 254, 323.

Defect Reports

1. C library linkage editing oversight

The change specified in the proposed resolution below did not make
it into the Standard. This change was accepted in principle at the
London meeting, and the exact wording below was accepted at the
Morristown meeting.

Proposed resolution:

Change 17.6.2.3 [using.linkage] paragraph 2
from:

It is unspecified whether a name from the Standard C library
declared with external linkage has either extern "C" or
extern "C++" linkage.

to:

Whether a name from the Standard C library declared with external
linkage has extern "C" or extern "C++" linkage
is implementation defined. It is recommended that an implementation
use extern "C++" linkage for this purpose.

Function F registered with atexit has a local static variable t,
and F is called for the first time during exit processing. A local
static object is initialized the first time control flow passes
through its definition, and all static objects are destroyed during
exit processing. Is the code valid? If so, what are its semantics?

Section 18.3 "Start and termination" says that if a function
F is registered with atexit before a static object t is initialized, F
will not be called until after t's destructor completes.

In example 2, function F is registered with atexit before its local
static object O could possibly be initialized. On that basis, it must
not be called by exit processing until after O's destructor
completes. But the destructor cannot be run until after F is called,
since otherwise the object could not be constructed in the first
place.

If the program is valid, the standard is self-contradictory about
its semantics.

I plan to submit Example 1 as a public comment on the C9X CD, with
a recommendation that the results be undefined. (Alternative: make it
unspecified. I don't think it is worthwhile to specify the case where
f1 itself registers additional functions, each of which registers
still more functions.)

I think we should resolve the situation in whatever way the C
committee decides.

For Example 2, I recommend we declare the results undefined.

[See reflector message lib-6500 for further discussion.]

Proposed resolution:

Change section 18.3/8 from:

First, objects with static storage duration are destroyed and
functions registered by calling atexit are called. Objects with
static storage duration are destroyed in the reverse order of the
completion of their constructor. (Automatic objects are not
destroyed as a result of calling exit().) Functions registered with
atexit are called in the reverse order of their registration. A
function registered with atexit before an object obj1 of static
storage duration is initialized will not be called until obj1's
destruction has completed. A function registered with atexit after
an object obj2 of static storage duration is initialized will be
called before obj2's destruction starts.

to:

First, objects with static storage duration are destroyed and
functions registered by calling atexit are called. Non-local objects
with static storage duration are destroyed in the reverse order of
the completion of their constructor. (Automatic objects are not
destroyed as a result of calling exit().) Functions registered with
atexit are called in the reverse order of their registration, except
that a function is called after any previously registered functions
that had already been called at the time it was registered. A
function registered with atexit before a non-local object obj1 of
static storage duration is initialized will not be called until
obj1's destruction has completed. A function registered with atexit
after a non-local object obj2 of static storage duration is
initialized will be called before obj2's destruction starts. A local
static object obj3 is destroyed at the same time it would be if a
function calling the obj3 destructor were registered with atexit at
the completion of the obj3 constructor.

Rationale:

See 99-0039/N1215, October 22, 1999, by Stephen D. Clamage for the analysis
supporting the proposed resolution.

At the very end of the basic_string class definition is the signature: int
compare(size_type pos1, size_type n1, const charT* s, size_type n2 = npos) const; In the
following text this is defined as: returns
basic_string<charT,traits,Allocator>(*this,pos1,n1).compare(
basic_string<charT,traits,Allocator>(s,n2));

Since the constructor basic_string(const charT* s, size_type n, const Allocator& a
= Allocator()) clearly requires that s != NULL and n < npos and further states that it
throws length_error if n == npos, it appears the compare() signature above should always
throw length_error if invoked like so: str.compare(1, str.size()-1, s); where 's' is some
null terminated character array.

This appears to be a typo since the obvious intent is to allow either the call above or
something like: str.compare(1, str.size()-1, s, strlen(s)-1);

7. String clause minor problems

(1) In 21.4.6.4 [string::insert], the description of template
<class InputIterator> insert(iterator, InputIterator,
InputIterator) makes no sense. It refers to a member function that
doesn't exist. It also talks about the return value of a void
function.

(2) Several versions of basic_string::replace don't appear in the
class synopsis.

(3) basic_string::push_back appears in the synopsis, but is never
described elsewhere. In the synopsis its argument is const charT,
which doesn't make much sense; it should probably be charT, or
possibly const charT&.

(4) basic_string::pop_back is missing.

(5) int compare(size_type pos, size_type n1, charT* s, size_type n2
= npos) makes no sense. First, it's const charT* in the synopsis and
charT* in the description. Second, given what it says in RETURNS,
leaving out the final argument will always result in an exception
getting thrown. This is paragraphs 5 and 6 of
21.4.6.8 [string::swap]

(6) In table 37, in section 21.2.1 [char.traits.require],
there's a note for X::move(s, p, n). It says "Copies correctly
even where p is in [s, s+n)". This is correct as far as it goes,
but it doesn't go far enough; it should also guarantee that the copy
is correct even where s is in [p, p+n). These are two orthogonal
guarantees, and neither one follows from the other. Both guarantees
are necessary if X::move is supposed to have the same sort of
semantics as memmove (which was clearly the intent), and both
guarantees are necessary if X::move is actually supposed to be
useful.

8. Locale::global lacks guarantee

It appears there's an important guarantee missing from clause
22. We're told that invoking locale::global(L) sets the C locale if L
has a name. However, we're not told whether or not invoking
setlocale(s) sets the global C++ locale.

The intent, I think, is that it should not, but I can't find any
such words anywhere.

Proposed resolution:

Add a sentence at the end of 22.3.1.5 [locale.statics],
paragraph 2:

No library function other than locale::global() shall affect
the value returned by locale().

9. Operator new(0) calls should not yield the same pointer

Scott Meyers, in a comp.std.c++ posting: I just noticed that
section 3.7.3.1 of CD2 seems to allow for the possibility that all
calls to operator new(0) yield the same pointer, an implementation
technique specifically prohibited by ARM 5.3.3. Was this prohibition
really lifted? Does the FDIS agree with CD2 in this regard? [Issues
list maintainer's note: the IS is the same.]

Proposed resolution:

Change the last paragraph of 3.7.3 from:

Any allocation and/or deallocation functions defined in a C++ program shall
conform to the semantics specified in 3.7.3.1 and 3.7.3.2.

to:

Any allocation and/or deallocation functions defined in a C++ program,
including the default versions in the library, shall conform to the semantics
specified in 3.7.3.1 and 3.7.3.2.

Change 3.7.3.1/2, next-to-last sentence, from :

If the size of the space requested is zero, the value returned shall not be
a null pointer value (4.10).

to:

Even if the size of the space requested is zero, the request can fail. If
the request succeeds, the value returned shall be a non-null pointer value
(4.10) p0 different from any previously returned value p1, unless that value
p1 was since passed to an operator delete.

5.3.4/7 currently reads:

When the value of the expression in a direct-new-declarator is zero, the
allocation function is called to allocate an array with no elements. The
pointer returned by the new-expression is non-null. [Note: If the library
allocation function is called, the pointer returned is distinct from the
pointer to any other object.]

Retain the first sentence, and delete the remainder.

18.5.1 currently has no text. Add the following:

Except where otherwise specified, the provisions of 3.7.3 apply to the
library versions of operator new and operator delete.

To 18.5.1.3, add the following text:

The provisions of 3.7.3 do not apply to these reserved placement forms of
operator new and operator delete.

Rationale:

See 99-0040/N1216, October 22, 1999, by Stephen D. Clamage for the analysis
supporting the proposed resolution.

11. Bitset minor problems

(1) bitset<>::operator[] is mentioned in the class synopsis (23.3.5), but it is
not documented in 23.3.5.2.

(2) The class synopsis only gives a single signature for bitset<>::operator[],
reference operator[](size_t pos). This doesn't make much sense. It ought to be overloaded
on const: reference operator[](size_t pos); bool operator[](size_t pos) const.

(3) Bitset's stream input function (23.3.5.3) ought to skip all whitespace before
trying to extract 0s and 1s. The standard doesn't explicitly say that, though. This should
go in the Effects clause.

Proposed resolution:

ITEMS 1 AND 2:

In the bitset synopsis (20.5 [template.bitset]),
replace the member function

reference operator[](size_t pos);

with the two member functions

bool operator[](size_t pos) const;
reference operator[](size_t pos);

Add the following text at the end of 20.5.2 [bitset.members],
immediately after paragraph 45:

bitset<N>::reference operator[](size_t pos);
Requires: pos is valid
Throws: nothing
Returns: An object of type bitset<N>::reference such that (*this)[pos]
== this->test(pos), and such that (*this)[pos] = val is equivalent to this->set(pos,
val);

Rationale:

The LWG believes Item 3 is not a defect. "Formatted
input" implies the desired semantics. See 27.7.2.2 [istream.formatted].

14. Locale::combine should be const

locale::combine is the only member function of locale (other than constructors and
destructor) that is not const. There is no reason for it not to be const, and good reasons
why it should have been const. Furthermore, leaving it non-const conflicts with 22.1.1
paragraph 6: "An instance of a locale is immutable."

History: this member function originally was a constructor. It happened that the
interface it specified had no corresponding language syntax, so it was changed to a member
function. As constructors are never const, there was no "const" in the interface
which was transformed into member "combine". It should have been added at that
time, but the omission was not noticed.

Proposed resolution:

In 22.3.1 [locale] and also in 22.3.1.3 [locale.members], add
"const" to the declaration of member combine:

This section describes the process of parsing a text boolean value from the input
stream. It does not say it recognizes either of the sequences "true" or
"false" and returns the corresponding bool value; instead, it says it recognizes
only one of those sequences, and chooses which according to the received value of a
reference argument intended for returning the result, and reports an error if the other
sequence is found. (!) Furthermore, it claims to get the names from the ctype<>
facet rather than the numpunct<> facet, and it examines the "boolalpha"
flag wrongly; it doesn't define the value "loc"; and finally, it computes
wrongly whether to use numeric or "alpha" parsing.

Notice this works reasonably when the candidate strings are both empty, or equal, or
when one is a substring of the other. The proposed text below captures the logic of the
code above.

Proposed resolution:

In 22.4.2.1.2 [facet.num.get.virtuals], in the first line of paragraph 14,
change "&&" to "&".

Then, replace paragraphs 15 and 16 as follows:

Otherwise target sequences are determined "as if" by
calling the members falsename() and
truename() of the facet obtained by
use_facet<numpunct<charT> >(str.getloc()).
Successive characters in the range [in,end) (see
[lib.sequence.reqmts]) are obtained and matched against
corresponding positions in the target sequences only as necessary to
identify a unique match. The input iterator in is
compared to end only when necessary to obtain a
character. If and only if a target sequence is uniquely matched,
val is set to the corresponding value.

The in iterator is always left pointing one position beyond the last character
successfully matched. If val is set, then err is set to str.goodbit; or to
str.eofbit if, when seeking another character to match, it is found that
(in==end). If val is not set, then err is set to str.failbit; or to
(str.failbit|str.eofbit) if
the reason for the failure was that (in==end). [Example: for targets
true:"a" and false:"abb", the input sequence "a" yields
val==true and err==str.eofbit; the input sequence "abc" yields
err=str.failbit, with in ending at the 'c' element. For targets
true:"1"
and false:"0", the input sequence "1" yields val==true
and err=str.goodbit. For empty targets (""), any input sequence yields
err==str.failbit. --end example]

In the list of num_get<> non-virtual members on page 22-23, the member
that parses bool values was omitted from the list of definitions of non-virtual
members, though it is listed in the class definition and the corresponding
virtual is listed everywhere appropriate.

Proposed resolution:

Add at the beginning of 22.4.2.1.1 [facet.num.get.members]
another get member for bool&, copied from the entry in
22.4.2.1 [locale.num.get].

In the definitions of codecvt<>::do_out and do_in, they are
specified to return noconv if "no conversion is
needed". This definition is too vague, and does not say
normatively what is done with the buffers.

Proposed resolution:

Change the entry for noconv in the table under paragraph 4 in section
22.4.1.4.2 [locale.codecvt.virtuals] to read:

noconv: internT and externT are the same type,
and input sequence is identical to converted sequence.

Change the Note in paragraph 2 to normative text as follows:

If it returns noconv, internT and externT are the
same type and the converted sequence is identical to the input sequence [from,from_next).
to_next is set equal to to, the value of state is
unchanged, and there are no changes to the values in [to, to_limit).

20. Thousands_sep returns wrong type

The synopsis for numpunct<>::do_thousands_sep, and the
definition of numpunct<>::thousands_sep which calls it, specify
that it returns a value of type char_type. Here it is erroneously
described as returning a "string_type".

21. Codecvt_byname<> instantiations

In the second table in the section, captioned "Required
instantiations", the instantiations for codecvt_byname<>
have been omitted. These are necessary to allow users to construct a
locale by name from facets.

Proposed resolution:

Add in 22.3.1.1.1 [locale.category] to the table captioned
"Required instantiations", in the category "ctype"
the lines

22. Member open vs. flags

The description of basic_istream<>::open leaves unanswered questions about how it
responds to or changes flags in the error status for the stream. A strict reading
indicates that it ignores the bits and does not change them, which confuses users who do
not expect eofbit and failbit to remain set after a successful open. There are three
reasonable resolutions: 1) status quo 2) fail if fail(), ignore eofbit 3) clear failbit
and eofbit on call to open().

This may seem surprising to some users, but it's just an instance
of a general rule: error flags are never cleared by the
implementation. The only way error flags are ever cleared is if
the user explicitly clears them by hand.

The LWG believed that preserving this general rule was
important enough so that an exception shouldn't be made just for this
one case. The resolution of this issue clarifies what the LWG
believes to have been the original intent.

The current description of numeric input does not account for the
possibility of overflow. This is an implicit result of changing the
description to rely on the definition of scanf() (which fails to
report overflow), and conflicts with the documented behavior of
traditional and current implementations.

Users expect, when reading a character sequence that results in a
value unrepresentable in the specified type, to have an error
reported. The standard as written does not permit this.

Further comments from Dietmar:

I don't feel comfortable with the proposed resolution to issue 23: It
kind of simplifies the issue too much. Here is what is going on:

Currently, the behavior of numeric overflow is rather counter intuitive
and hard to trace, so I will describe it briefly:

According to 22.4.2.1.2 [facet.num.get.virtuals]
paragraph 11 failbit is set if scanf() would
return an input error; otherwise a value is converted to the rules
of scanf.

scanf() is defined in terms of fscanf().

fscanf() returns an input failure if during conversion no
character matching the conversion specification could be extracted
before reaching EOF. This is the only reason for fscanf()
to fail due to an input error and clearly does not apply to the case
of overflow.

Thus, the conversion is performed according to the rules of
fscanf() which basically says that strtod,
strtol(), etc. are to be used for the conversion.

The strtod(), strtol(), etc. functions consume as
many matching characters as there are and on overflow continue to
consume matching characters but also return a value identical to
the maximum (or minimum for signed types if there was a leading minus)
value of the corresponding type and set errno to ERANGE.

Thus, according to the current wording in the standard, overflows
can be detected! All what is to be done is to check errno
after reading an element and, of course, clearing errno
before trying a conversion. With the current wording, it can be
detected whether the overflow was due to a positive or negative
number for signed types.

Further discussion from Redmond:

The basic problem is that we've defined our behavior,
including our error-reporting behavior, in terms of C90. However,
C90's method of reporting overflow in scanf is not technically an
"input error". The strto_* functions are more precise.

There was general consensus that failbit should be set
upon overflow. We considered three options based on this:

Set failbit upon conversion error (including overflow), and
don't store any value.

Set failbit upon conversion error, and also set errno to
indicate the precise nature of the error.

Set failbit upon conversion error. If the error was due to
overflow, store +-numeric_limits<T>::max() as an
overflow indication.

Straw poll: (1) 5; (2) 0; (3) 8.

Discussed at Lillehammer. General outline of what we want the
solution to look like: we want to say that overflow is an error, and
provide a way to distinguish overflow from other kinds of errors.
Choose candidate field the same way scanf does, but don't describe
the rest of the process in terms of format. If a finite input field
is too large (positive or negative) to be represented as a finite
value, then set failbit and assign the nearest representable value.
Bill will provide wording.

Discussed at Toronto:
N2327
is in alignment with the direction we wanted to go with in Lillehammer. Bill
to work on.

Proposed resolution:

Change 22.4.2.1.2 [facet.num.get.virtuals], end of p3, from:

Stage 3: The result of stage 2 processing can be one of:

A sequence of chars has been accumulated in stage 2 that is
converted (according to the rules of scanf) to a value of the
type of val. This value is stored in val and ios_base::goodbit is
stored in err.

The sequence of chars accumulated in stage 2 would have caused
scanf to report an input failure. ios_base::failbit is
assigned to err.

to:

Stage 3: The sequence of chars accumulated in stage 2 (the field) is
converted to a numeric value by the rules of one of the functions declared
in the header <cstdlib>:

For a signed integer value, the function strtoll.

For an unsigned integer value, the function strtoull.

For a floating-point value, the function strtold.

The numeric value to be stored can be one of:

zero, if the conversion function fails to convert the entire field.
ios_base::failbit is assigned to err.

the most positive representable value, if the field represents a value
too large positive to be represented in val. ios_base::failbit is assigned
to err.

the most negative representable value (zero for unsigned integer), if
the field represents a value too large negative to be represented in val.
ios_base::failbit is assigned to err.

-6- Effects: If
(str.flags()&ios_base::boolalpha)==0 then input
proceeds as it would for a long except that if a value is being
stored into val, the value is determined according to the
following: If the value to be stored is 0 then false is stored.
If the value is 1 then true is stored. Otherwise true is
stored and ios_base::failbit is assigned to err.

-7- Otherwise target sequences are determined "as if" by calling the
members falsename() and truename() of the facet
obtained by use_facet<numpunct<charT>
>(str.getloc()). Successive characters in the range
[in,end) (see 23.1.1) are obtained and matched
against corresponding positions in the target sequences only as
necessary to identify a unique match. The input iterator in is
compared to end only when necessary to obtain a character. If and
only if a target sequence is uniquely matched, val is set to the
corresponding value. Otherwise false is stored and ios_base::failbit
is assigned to err.

In the description of operator<< applied to strings, the standard says that it uses
the smaller of os.width() and str.size() to pad "as described in stage 3"
elsewhere; but this is inconsistent, as it allows no possibility of space for padding.

fails to demonstrate correct use of the facilities described. In
particular, it fails to use traits operators, and specifies incorrect
semantics. (E.g. it specifies skipping over the first character in the
sequence without examining it.)

The originally proposed replacement code for the example was not
correct. The LWG tried in Kona and again in Tokyo to correct it
without success. In Tokyo, an implementor reported that actual working
code ran over one page in length and was quite complicated. The LWG
decided that it would be counter-productive to include such a lengthy
example, which might well still contain errors.

27. String::erase(range) yields wrong iterator

The string::erase(iterator first, iterator last) is specified to return an element one
place beyond the next element after the last one erased. E.g. for the string
"abcde", erasing the range ['b'..'d') would yield an iterator for element 'e',
while 'd' has not been erased.

Proposed resolution:

In 21.4.6.5 [string::erase], paragraph 10, change:

Returns: an iterator which points to the element immediately following _last_ prior to
the element being erased.

to read

Returns: an iterator which points to the element pointed to by _last_ prior to the
other elements being erased.

31. Immutable locale values

Paragraph 6, says "An instance of locale is
immutable; once a facet reference is obtained from it,
...". This has caused some confusion, because locale variables
are manifestly assignable.

Proposed resolution:

In 22.3.1 [locale] replace paragraph 6

An instance of locale is immutable; once a facet
reference is obtained from it, that reference remains usable as long
as the locale value itself exists.

with

Once a facet reference is obtained from a locale object by
calling use_facet<>, that reference remains usable, and the
results from member functions of it may be cached and re-used, as
long as some locale object refers to that facet.

32. Pbackfail description inconsistent

The description of the required state before calling the virtual member
basic_streambuf<>::pbackfail is inconsistent with the conditions
described in 27.5.2.2.4 [lib.streambuf.pub.pback], where member sputbackc calls it.
Specifically, the latter says it calls pbackfail if:

traits::eq(c,gptr()[-1]) is false

where pbackfail claims to require:

traits::eq(*gptr(),traits::to_char_type(c)) returns false

It appears that the pbackfail description is wrong.

Proposed resolution:

In 27.6.3.4.4 [streambuf.virt.pback], paragraph 1, change:

"traits::eq(*gptr(),traits::to_char_type( c))"

to

"traits::eq(traits::to_char_type(c),gptr()[-1])"

Rationale:

Note deliberate reordering of arguments for clarity in addition to the correction of
the argument value.

36. Iword & pword storage lifetime omitted

In the definitions for ios_base::iword and pword, the lifetime of the storage is
specified badly, so that an implementation which only keeps the last value stored appears
to conform. In particular, it says:

The reference returned may become invalid after another call to the object's iword
member with a different index ...

This is not idle speculation; at least one implementation was done this way.

Proposed resolution:

In 27.5.3.5 [ios.base.storage], in both paragraph 2 and also in
paragraph 4, replace the sentence:

The reference returned may become invalid after another call to the object's iword
[pword] member with a different index, after a call to its copyfmt member, or when the
object is destroyed.

with:

The reference returned is invalid after any other operations on the object. However,
the value of the storage referred to is retained, so that until the next call to copyfmt,
calling iword [pword] with the same index yields another reference to the same value.

38. Facet definition incomplete

It has been noticed by Esa Pulkkinen that the definition of
"facet" is incomplete. In particular, a class derived from
another facet, but which does not define a member id, cannot
safely serve as the argument F to use_facet<F>(loc),
because there is no guarantee that a reference to the facet instance
stored in loc is safely convertible to F.

Proposed resolution:

In the definition of std::use_facet<>(), replace the text in paragraph 1 which
reads:

Get a reference to a facet of a locale.

with:

Requires: Facet is a facet class whose definition
contains the public static member id as defined in 22.3.1.1.2 [locale.facet].

[
Kona: strike as overspecification the text "(not inherits)"
from the original resolution, which read "... whose definition
contains (not inherits) the public static member
id..."
]

40. Meaningless normative paragraph in examples

Paragraph 3 of the locale examples is a description of part of an
implementation technique that has lost its referent, and doesn't mean
anything.

Proposed resolution:

Delete 22.4.8 [facets.examples] paragraph 3 which begins "This
initialization/identification system depends...", or (at the
editor's option) replace it with a place-holder to keep the paragraph
numbering the same.

41. Ios_base needs clear(), exceptions()

The description of ios_base::iword() and pword() in 27.5.3.4 [ios.members.static] says that if they fail, they "set badbit,
which may throw an exception". However, ios_base offers no
interface to set or to test badbit; those interfaces are defined in
basic_ios<>.

Proposed resolution:

Change the description in 27.5.3.5 [ios.base.storage] in
paragraph 2, and also in paragraph 4, as follows. Replace

If the function fails it sets badbit, which may throw an exception.

with

If the function fails, and *this is a base sub-object of
a basic_ios<> object or sub-object, the effect is
equivalent to calling basic_ios<>::setstate(badbit)
on the derived object (which may throw failure).

specifies an Allocator argument default value that is
counter-intuitive. The natural choice for the allocator to copy from
is str.get_allocator(). Though this cannot be expressed in
default-argument notation, overloading suffices.

Alternatively, the other containers in Clause 23 (deque, list,
vector) do not have this form of constructor, so it is inconsistent,
and an evident source of confusion, for basic_string<> to have
it, so it might better be removed.

Proposed resolution:

In 21.4 [basic.string], replace the declaration of the copy
constructor as follows:

44. Iostreams use operator== on int_type values

Many of the specifications for iostreams specify that character
values or their int_type equivalents are compared using operators ==
or !=, though in other places traits::eq() or traits::eq_int_type is
specified to be used throughout. This is an inconsistency; we should
change uses of == and != to use the traits members instead.

Fixing this issue highlights another sloppiness in
lib.istream.unformatted paragraph 24: this clause mentions a "character"
which is then compared to an 'int_type' (see item 5. in the list
below). It is not clear whether this requires explicit words and
if so what these words are supposed to be. A similar issue exists,
BTW, for operator*() of istreambuf_iterator which returns the result
of sgetc() as a character type (see lib.istreambuf.iterator::op*
paragraph 1), and for operator++() of istreambuf_iterator which
passes the result of sbumpc() to a constructor taking a char_type
(see lib.istreambuf.iterator::operator++ paragraph 3). Similarly, the
assignment operator of ostreambuf_iterator passes a char_type to a function
taking an int_type (see lib.ostreambuf.iter.ops paragraph 1).

It is inconsistent to use comparisons using the traits functions in
Chapter 27 while not using them in Chapter 21, especially as some
of the inconsistent uses actually involve streams (eg. getline() on
streams). To avoid leaving this issue open still longer due to this
inconsistency (it is open since 1998), a list of changes to Chapter
21 is below.

In Chapter 24 there are several places with statements like "the end
of stream is reached (streambuf_type::sgetc() returns traits::eof())"
(lib.istreambuf.iterator paragraph 1, lib.ostreambuf.iter.ops
paragraph 5). It is unclear whether these should be clarified to use
traits::eq_int_type() for detecting traits::eof().

47. Imbue() and getloc() Returns clauses swapped

Section 27.4.2.3 specifies how imbue() and getloc() work. That
section has two RETURNS clauses, and they make no sense as
stated. They make perfect sense, though, if you swap them. Am I
correct in thinking that paragraphs 2 and 4 just got mixed up by
accident?

49. Underspecification of ios_base::sync_with_stdio

(1) 27.4.2.4 doesn't say what ios_base::sync_with_stdio(f)
returns. Does it return f, or does it return the previous
synchronization state? My guess is the latter, but the standard
doesn't say so.

(2) 27.4.2.4 doesn't say what it means for streams to be
synchronized with stdio. Again, of course, I can make some
guesses. (And I'm unhappy about the performance implications of those
guesses, but that's another matter.)

Proposed resolution:

Change the following sentence in 27.5.3.4 [ios.members.static]
returns clause from:

true if the standard iostream objects (27.3) are
synchronized and otherwise returns false.

to:

true if the previous state of the standard iostream
objects (27.3) was synchronized and otherwise returns
false.

Add the following immediately after 27.5.3.4 [ios.members.static],
paragraph 2:

When a standard iostream object str is synchronized with a
standard stdio stream f, the effect of inserting a character c by

fputc(c, f);

is the same as the effect of

str.rdbuf()->sputc(c);

for any sequence of characters; the effect of extracting a
character c by

c = fgetc(f);

is the same as the effect of:

c = str.rdbuf()->sbumpc();

for any sequence of characters; and the effect of pushing
back a character c by

ungetc(c, f);

is the same as the effect of

str.rdbuf()->sputbackc(c);

for any sequence of characters. [Footnote: This implies
that operations on a standard iostream object can be mixed arbitrarily
with operations on the corresponding stdio stream. In practical
terms, synchronization usually means that a standard iostream object
and a standard stdio object share a buffer. --End Footnote]

[pre-Copenhagen: PJP and Matt contributed the definition
of "synchronization"]

[post-Copenhagen: proposed resolution was revised slightly:
text was added in the non-normative footnote to say that operations
on the two streams can be mixed arbitrarily.]

50. Copy constructor and assignment operator of ios_base

As written, ios_base has a copy constructor and an assignment
operator. (Nothing in the standard says it doesn't have one, and all
classes have copy constructors and assignment operators unless you
take specific steps to avoid them.) However, nothing in 27.4.2 says
what the copy constructor and assignment operator do.

My guess is that this was an oversight, that ios_base is, like
basic_ios, not supposed to have a copy constructor or an assignment
operator.

Jerry Schwarz comments: Yes, it's an oversight, but in the opposite
sense to what you're suggesting. At one point there was a definite
intention that you could copy ios_base. It's an easy way to save the
entire state of a stream for future use. As you note, carrying out
that intention would have required an explicit description of the
semantics (e.g. what happens to the iarray and parray stuff).

Proposed resolution:

In 27.5.3 [ios.base], class ios_base, specify the copy
constructor and operator= members as being private.

Rationale:

The LWG believes the difficulty of specifying correct semantics
outweighs any benefit of allowing ios_base objects to be copyable.

The std::sort algorithm can in general only sort a given sequence
by moving around values. The list<>::sort() member, on the other
hand, could move around values or just update internal pointers. Either
method can leave iterators into the list<> dereferenceable, but
they would point to different things.

Matt Austern comments:

I think you've found an omission in the standard.

The library working group discussed this point, and there was
supposed to be a general requirement saying that list, set, map,
multiset, and multimap may not invalidate iterators, or change the
values that iterators point to, except when an operation does it
explicitly. So, for example, insert() doesn't invalidate any iterators
and erase() and remove() only invalidate iterators pointing to the
elements that are being erased.

I looked for that general requirement in the FDIS, and, while I
found a limited form of it for the sorted associative containers, I
didn't find it for list. It looks like it just got omitted.

The intention, though, is that list<>::sort does not
invalidate any iterators and does not change the values that any
iterator points to. There would be no reason to have the member
function otherwise.

Proposed resolution:

Add a new paragraph at the end of 23.1:

Unless otherwise specified (either explicitly or by defining a function in terms of
other functions), invoking a container member function or passing a container as an
argument to a library function shall not invalidate iterators to, or change the values of,
objects within that container.

Rationale:

This was US issue CD2-23-011; it was accepted in London but the
change was not made due to an editing oversight. The wording in the
proposed resolution is somewhat updated from CD2-23-011,
particularly the addition of the phrase "or change the values
of".

Second, 27.5.4.2 [fpos.operations], table 88. There are a couple of
different things wrong with it, some of which I've already discussed
with Jerry, but the most obvious mechanical error is that it
uses expressions like P(i) and p(i) without ever defining what sort
of thing "i" is.

(The other problem is that it requires support for streampos
arithmetic. This is impossible on some systems, i.e. ones where file
position is a complicated structure rather than just a number. Jerry
tells me that the intention was to require syntactic support for
streampos arithmetic, but that it wasn't actually supposed to do
anything meaningful except on platforms, like Unix, where genuine
arithmetic is possible.)

The LWG reviewed the additional question of whether or not
rdbuf(0) may set badbit. The answer is
clearly yes; it may be set via clear(). See 27.5.5.3 [basic.ios.members], paragraph 6. This issue was reviewed at length
by the LWG, which removed from the original proposed resolution a
footnote which incorrectly said "rdbuf(0) does not set
badbit".

54. Basic_streambuf's destructor

The class synopsis for basic_streambuf shows a (virtual)
destructor, but the standard doesn't say what that destructor does. My
assumption is that it does nothing, but the standard should say so
explicitly.

Proposed resolution:

Add after 27.6.3.1 [streambuf.cons] paragraph 2:

virtual ~basic_streambuf();

Effects: None.

55. Invalid stream position is undefined

Several member functions in clause 27 are defined in certain
circumstances to return an "invalid stream position", a term
that is defined nowhere in the standard. Two places (27.5.2.4.2,
paragraph 4, and 27.8.1.4, paragraph 15) contain a cross-reference to
a definition in _lib.iostreams.definitions_, a nonexistent
section.

I suspect that the invalid stream position is just supposed to be
pos_type(-1). Probably best to say explicitly in (for example)
27.5.2.4.2 that the return value is pos_type(-1), rather than to use
the term "invalid stream position", define that term
somewhere, and then put in a cross-reference.

The phrase "invalid stream position" appears ten times in
the C++ Standard. In seven places it refers to a return value, and it
should be changed. In three places it refers to an argument, and it
should not be changed. Here are the three places where "invalid
stream position" should not be changed:

57. Mistake in char_traits

21.1.3.2, paragraph 3, says "The types streampos and
wstreampos may be different if the implementation supports no shift
encoding in narrow-oriented iostreams but supports one or more shift
encodings in wide-oriented streams".

That's wrong: the two are the same type. The <iosfwd> summary
in 27.2 says that streampos and wstreampos are, respectively, synonyms
for fpos<char_traits<char>::state_type> and
fpos<char_traits<wchar_t>::state_type>, and, flipping back
to clause 21, we see in 21.1.3.1 and 21.1.3.2 that
char_traits<char>::state_type and
char_traits<wchar_t>::state_type must both be mbstate_t.

Proposed resolution:

Remove the sentence in 21.2.3.4 [char.traits.specializations.wchar.t] paragraph 3 which
begins "The types streampos and wstreampos may be
different..." .

59. Ambiguity in specification of gbump

27.5.2.3.1 says that basic_streambuf::gbump() "Advances the
next pointer for the input sequence by n."

The straightforward interpretation is that it is just gptr() +=
n. An alternative interpretation, though, is that it behaves as if it
calls sbumpc n times. (The issue, of course, is whether it might ever
call underflow.) There is a similar ambiguity in the case of
pbump.

Paragraph 1 of 27.6.1.2.1 contains general requirements for all
formatted input functions. Some of the functions defined in section
27.6.1.2 explicitly say that those requirements apply ("Behaves
like a formatted input member (as described in 27.6.1.2.1)"), but
others don't. The question: is 27.6.1.2.1 supposed to apply to
everything in 27.6.1.2, or only to those member functions that
explicitly say "behaves like a formatted input member"? Or
to put it differently: are we to assume that everything that appears
in a section called "Formatted input functions" really is a
formatted input function? I assume that 27.6.1.2.1 is intended to
apply to the arithmetic extractors (27.6.1.2.2), but I assume that it
is not intended to apply to extractors like

basic_istream& operator>>(basic_istream& (*pf)(basic_istream&));

and

basic_istream& operator>>(basic_streambuf*);

There is a similar ambiguity for unformatted input, formatted output, and unformatted
output.

Comments from Judy Ward: It seems like the problem is that the
basic_istream and basic_ostream operator <<()'s that are used
for the manipulators and streambuf* are in the wrong section and
should have their own separate section or be modified to make it clear
that the "Common requirements" listed in section 27.6.1.2.1
(for basic_istream) and section 27.6.2.5.1 (for basic_ostream) do not
apply to them.

Additional comments from Dietmar Kühl: It appears somewhat
nonsensical to consider the functions defined in 27.7.2.2.3 [istream::extractors] paragraphs 1 to 5 to be "formatted input
functions", but since these functions are defined in a section
labeled "Formatted input functions" it is unclear to me
whether these operators are considered formatted input functions which
have to conform to the "common requirements" from 27.7.2.2.1 [istream.formatted.reqmts]. If this is the case, all manipulators, not
just ws, would skip whitespace unless noskipws is
set (... but setting noskipws using the manipulator syntax
would also skip whitespace :-)

It is not clear which functions
are to be considered unformatted input functions. As written, it seems
that all functions in 27.7.2.3 [istream.unformatted] are unformatted input
functions. However, it does not really make much sense to construct a
sentry object for gcount(), sync(), etc. It is also
unclear what happens to gcount() if,
e.g., gcount(), putback(), unget(), or
sync() is called: these functions don't extract characters,
and some of them even "unextract" a character. Should this still
be reflected in gcount()? Of course, it could be read as saying
that after a call to gcount(), gcount() returns 0
(the last unformatted input function, gcount(), didn't
extract any character), and that after a call to putback(),
gcount() returns -1 (the last unformatted input
function, putback(), did "extract" a character back into the
stream). Correspondingly for unget(). Is this what is
intended? If so, this should be clarified; otherwise, a corresponding
clarification should be added.

Proposed resolution:

In 27.6.1.2.2 [lib.istream.formatted.arithmetic], paragraph 1.
Change the beginning of the second sentence from "The conversion
occurs" to "These extractors behave as formatted input functions (as
described in 27.6.1.2.1). After a sentry object is constructed,
the conversion occurs"

In 27.6.1.2.3, [lib.istream::extractors], before paragraph 1.
Add an effects clause: "Effects: None. This extractor does
not behave as a formatted input function (as described in
27.6.1.2.1)."

In 27.6.1.2.3, [lib.istream::extractors], paragraph 2. Change the
effects clause to "Effects: Calls pf(*this). This extractor does not
behave as a formatted input function (as described in 27.6.1.2.1)."

In 27.6.1.2.3, [lib.istream::extractors], paragraph 4. Change the
effects clause to "Effects: Calls pf(*this). This extractor does not
behave as a formatted input function (as described in 27.6.1.2.1)."

In 27.6.1.2.3, [lib.istream::extractors], paragraph 12. Change the
first two sentences from "If sb is null, calls setstate(failbit),
which may throw ios_base::failure (27.4.4.3). Extracts characters
from *this..." to "Behaves as a formatted input function (as described
in 27.6.1.2.1). If sb is null, calls setstate(failbit), which may
throw ios_base::failure (27.4.4.3). After a sentry object is
constructed, extracts characters from *this...".

In 27.6.1.3, [lib.istream.unformatted], before paragraph 2. Add an
effects clause. "Effects: none. This member function does not behave
as an unformatted input function (as described in 27.6.1.3, paragraph 1)."

In 27.6.1.3, [lib.istream.unformatted], paragraph 3. Change the
beginning of the first sentence of the effects clause from "Extracts a
character" to "Behaves as an unformatted input function (as described
in 27.6.1.3, paragraph 1). After constructing a sentry object, extracts a
character"

In 27.6.1.3, [lib.istream.unformatted], paragraph 5. Change the
beginning of the first sentence of the effects clause from "Extracts a
character" to "Behaves as an unformatted input function (as described
in 27.6.1.3, paragraph 1). After constructing a sentry object, extracts a
character"

In 27.6.1.3, [lib.istream.unformatted], paragraph 7. Change the
beginning of the first sentence of the effects clause from "Extracts
characters" to "Behaves as an unformatted input function (as described
in 27.6.1.3, paragraph 1). After constructing a sentry object, extracts
characters"

[No change needed in paragraph 10, because it refers to paragraph 7.]

In 27.6.1.3, [lib.istream.unformatted], paragraph 12. Change the
beginning of the first sentence of the effects clause from "Extracts
characters" to "Behaves as an unformatted input function (as described
in 27.6.1.3, paragraph 1). After constructing a sentry object, extracts
characters"

[No change needed in paragraph 15.]

In 27.6.1.3, [lib.istream.unformatted], paragraph 17. Change the
beginning of the first sentence of the effects clause from "Extracts
characters" to "Behaves as an unformatted input function (as described
in 27.6.1.3, paragraph 1). After constructing a sentry object, extracts
characters"

[No change needed in paragraph 23.]

In 27.6.1.3, [lib.istream.unformatted], paragraph 24. Change the
beginning of the first sentence of the effects clause from "Extracts
characters" to "Behaves as an unformatted input function (as described
in 27.6.1.3, paragraph 1). After constructing a sentry object, extracts
characters"

In 27.6.1.3, [lib.istream.unformatted], before paragraph 27. Add an
Effects clause: "Effects: Behaves as an unformatted input function (as
described in 27.6.1.3, paragraph 1). After constructing a sentry
object, reads but does not extract the current input character."

In 27.6.1.3, [lib.istream.unformatted], paragraph 28. Change the
first sentence of the Effects clause from "If !good() calls" to
"Behaves as an unformatted input function (as described in 27.6.1.3,
paragraph 1). After constructing a sentry object, if !good() calls"

In 27.6.1.3, [lib.istream.unformatted], paragraph 30. Change the
first sentence of the Effects clause from "If !good() calls" to
"Behaves as an unformatted input function (as described in 27.6.1.3,
paragraph 1). After constructing a sentry object, if !good() calls"

In 27.6.1.3, [lib.istream.unformatted], paragraph 32. Change the
first sentence of the Effects clause from "If !good() calls..." to
"Behaves as an unformatted input function (as described in 27.6.1.3,
paragraph 1). After constructing a sentry object, if !good()
calls..." Add a new sentence to the end of the Effects clause:
"[Note: this function extracts no characters, so the value returned
by the next call to gcount() is 0.]"

In 27.6.1.3, [lib.istream.unformatted], paragraph 34. Change the
first sentence of the Effects clause from "If !good() calls" to
"Behaves as an unformatted input function (as described in 27.6.1.3,
paragraph 1). After constructing a sentry object, if !good() calls".
Add a new sentence to the end of the Effects clause: "[Note: this
function extracts no characters, so the value returned by the next
call to gcount() is 0.]"

In 27.6.1.3, [lib.istream.unformatted], paragraph 36. Change the
first sentence of the Effects clause from "If rdbuf() is" to "Behaves
as an unformatted input function (as described in 27.6.1.3, paragraph
1), except that it does not count the number of characters extracted
and does not affect the value returned by subsequent calls to
gcount(). After constructing a sentry object, if rdbuf() is".

In 27.6.1.3, [lib.istream.unformatted], before paragraph 37. Add an
Effects clause: "Effects: Behaves as an unformatted input function (as
described in 27.6.1.3, paragraph 1), except that it does not count the
number of characters extracted and does not affect the value returned
by subsequent calls to gcount()." Change the first sentence of
paragraph 37 from "if fail()" to "after constructing a sentry object,
if fail()".

In 27.6.1.3, [lib.istream.unformatted], paragraph 38. Change the
first sentence of the Effects clause from "If fail()" to "Behaves
as an unformatted input function (as described in 27.6.1.3, paragraph
1), except that it does not count the number of characters extracted
and does not affect the value returned by subsequent calls to
gcount(). After constructing a sentry object, if fail()".

In 27.6.1.3, [lib.istream.unformatted], paragraph 40. Change the
first sentence of the Effects clause from "If fail()" to "Behaves
as an unformatted input function (as described in 27.6.1.3, paragraph
1), except that it does not count the number of characters extracted
and does not affect the value returned by subsequent calls to
gcount(). After constructing a sentry object, if fail()".

In 27.6.2.5.2 [lib.ostream.inserters.arithmetic], paragraph 1. Change
the beginning of the third sentence from "The formatting conversion"
to "These extractors behave as formatted output functions (as
described in 27.6.2.5.1). After the sentry object is constructed, the
conversion occurs".

In 27.6.2.5.3 [lib.ostream.inserters], before paragraph 1. Add an
effects clause: "Effects: None. Does not behave as a formatted output
function (as described in 27.6.2.5.1).".

In 27.6.2.5.3 [lib.ostream.inserters], paragraph 2. Change the
effects clause to "Effects: calls pf(*this). This extractor does not
behave as a formatted output function (as described in 27.6.2.5.1).".

In 27.6.2.5.3 [lib.ostream.inserters], paragraph 4. Change the
effects clause to "Effects: calls pf(*this). This extractor does not
behave as a formatted output function (as described in 27.6.2.5.1).".

In 27.6.2.5.3 [lib.ostream.inserters], paragraph 6. Change the first
sentence from "If sb" to "Behaves as a formatted output function (as
described in 27.6.2.5.1). After the sentry object is constructed, if
sb".

In 27.6.2.6 [lib.ostream.unformatted], paragraph 2. Change the first
sentence from "Inserts the character" to "Behaves as an unformatted
output function (as described in 27.6.2.6, paragraph 1). After
constructing a sentry object, inserts the character".

In 27.6.2.6 [lib.ostream.unformatted], paragraph 5. Change the first
sentence from "Obtains characters" to "Behaves as an unformatted
output function (as described in 27.6.2.6, paragraph 1). After
constructing a sentry object, obtains characters".

In 27.6.2.6 [lib.ostream.unformatted], paragraph 7. Add a new
sentence at the end of the paragraph: "Does not behave as an
unformatted output function (as described in 27.6.2.6, paragraph 1)."

Rationale:

See J16/99-0043==WG21/N1219, Proposed Resolution to Library Issue 60,
by Judy Ward and Matt Austern. This proposed resolution is section
VI of that paper.

The introduction to the section on unformatted input (27.6.1.3)
says that every unformatted input function catches all exceptions that
were thrown during input, sets badbit, and then conditionally rethrows
the exception. That seems clear enough. Several of the specific
functions, however, such as get() and read(), are documented in some
circumstances as setting eofbit and/or failbit. (The standard notes,
correctly, that setting eofbit or failbit can sometimes result in an
exception being thrown.) The question: if one of these functions
throws an exception triggered by setting failbit, is this an exception
"thrown during input" and hence covered by 27.6.1.3, or does
27.6.1.3 only refer to a limited class of exceptions? Just to make
this concrete, suppose you have the following snippet.

Now suppose we reach EOF before we've read N characters. What
iostate bits can we expect to be set, and what exception (if any) will
be thrown?

Proposed resolution:

In 27.6.1.3, paragraph 1, after the sentence that begins
"If an exception is thrown...", add the following
parenthetical comment: "(Exceptions thrown from
basic_ios<>::clear() are not caught or rethrown.)"

Rationale:

The LWG looked to two alternative wordings, and choose the proposed
resolution as better standardese.

Clause 27 details an exception-handling policy for formatted input,
unformatted input, and formatted output. It says nothing for
unformatted output (27.6.2.6). 27.6.2.6 should either include the same
kind of exception-handling policy as in the other three places, or
else it should have a footnote saying that the omission is
deliberate.

Proposed resolution:

In 27.6.2.6, paragraph 1, replace the last sentence ("In any
case, the unformatted output function ends by destroying the sentry
object, then returning the value specified for the formatted output
function.") with the following text:

If an exception is thrown during output, then ios::badbit is
turned on [Footnote: without causing an ios::failure to be
thrown.] in *this's error state. If (exceptions() &
badbit) != 0 then the exception is rethrown. In any case, the
unformatted output function ends by destroying the sentry object,
then, if no exception was thrown, returning the value specified for
the formatted output function.

Rationale:

This exception-handling policy is consistent with that of formatted
input, unformatted input, and formatted output.

If the function inserts no characters, it calls
setstate(failbit), which may throw
ios_base::failure (27.4.4.3). If it inserted no characters
because it caught an exception thrown while extracting characters
from sb and failbit is on in exceptions()
(27.4.4.3), then the caught exception is rethrown.

D.7.1.3, paragraph 19, says that strstreambuf::setbuf
"Performs an operation that is defined separately for each class
derived from strstreambuf". This is obviously an incorrect
cut-and-paste from basic_streambuf. There are no classes derived from
strstreambuf.

Proposed resolution:

D.9.1.3 [depr.strstreambuf.virtuals], paragraph 19, replace the setbuf effects
clause which currently says "Performs an operation that is
defined separately for each class derived from strstreambuf"
with:

Effects: implementation defined, except that
setbuf(0,0) has no effect.

Extractors for char* (27.6.1.2.3) do not store a null character
after the extracted character sequence whereas the unformatted
functions like get() do. Why is this?

Comment from Jerry Schwarz: There is apparently an editing
glitch. You'll notice that the last item of the list of what stops
extraction doesn't make any sense. It was supposed to be the line that
said a null is stored.

69. Must elements of a vector be contiguous?

The issue is this: Must the elements of a vector be in contiguous memory?

(Please note that this is entirely separate from the question of
whether a vector iterator is required to be a pointer; the answer to
that question is clearly "no," as that would rule out
debugging implementations.)

Proposed resolution:

Add the following text to the end of 23.3.6 [vector],
paragraph 1.

The elements of a vector are stored contiguously, meaning that if
v is a vector<T, Allocator> where T is some type
other than bool, then it obeys the identity &v[n]
== &v[0] + n for all 0 <= n < v.size().

Rationale:

The LWG feels that as a practical matter the answer is clearly
"yes". There was considerable discussion as to the best way
to express the concept of "contiguous", which is not
directly defined in the standard. Discussion included:

An operational definition similar to the above proposed resolution is
already used for valarray (26.6.2.4 [valarray.access]).

There is no need to explicitly consider a user-defined operator&
because elements must be CopyConstructible (23.2 [container.requirements] para 3)
and CopyConstructible (17.6.3.1 [utility.arg.requirements]) specifies
requirements for operator&.

71. Do_get_monthname synopsis missing argument

The locale facet member time_get<>::do_get_monthname
is described in 22.4.5.1.2 [locale.time.get.virtuals] with five arguments,
consistent with do_get_weekday and with its specified use by member
get_monthname. However, in the synopsis, it is specified instead with
four arguments. The missing argument is the "end" iterator
value.

Proposed resolution:

In 22.4.5.1 [locale.time.get], add an "end" argument to
the declaration of member do_get_monthname as follows:

75. Contradiction in codecvt::length's argument types

The class synopses for classes codecvt<> (22.2.1.5)
and codecvt_byname<> (22.2.1.6) say that the first
parameter of the member functions length and
do_length is of type const stateT&. The member
function descriptions, however (22.2.1.5.1, paragraph 6; 22.2.1.5.2,
paragraph 9) say that the type is stateT&. Either the
synopsis or the summary must be changed.

If (as I believe) the member function descriptions are correct,
then we must also add text saying how do_length changes its
stateT argument.

Proposed resolution:

In 22.4.1.4 [locale.codecvt], and also in 22.4.1.5 [locale.codecvt.byname],
change the stateT argument type on both member
length() and member do_length() from

const stateT&

to

stateT&

In 22.4.1.4.2 [locale.codecvt.virtuals], add to the definition for member
do_length a paragraph:

Effects: The effect on the state argument is "as if"
it called do_in(state, from, from_end, from, to, to+max,
to) for to pointing to a buffer of at least
max elements.

76. Can a codecvt facet always convert one internal character at a time?

This issue concerns the requirements on classes derived from
codecvt, including user-defined classes. What are the
restrictions on the conversion from external characters
(e.g. char) to internal characters (e.g. wchar_t)?
Or, alternatively, what assumptions about codecvt facets can
the I/O library make?

The question is whether it's possible to convert from internal
characters to external characters one internal character at a time,
and whether, given a valid sequence of external characters, it's
possible to pick off internal characters one at a time. Or, to put it
differently: given a sequence of external characters and the
corresponding sequence of internal characters, does a position in the
internal sequence correspond to some position in the external
sequence?

To make this concrete, suppose that [first, last) is a
sequence of M external characters and that [ifirst,
ilast) is the corresponding sequence of N internal
characters, where N > 1. That is, my_encoding.in(),
applied to [first, last), yields [ifirst,
ilast). Now the question: does there necessarily exist a
subsequence of external characters, [first, last_1), such
that the corresponding sequence of internal characters is the single
character *ifirst?

(What a "no" answer would mean is that
my_encoding translates sequences only as blocks. There's a
sequence of M external characters that maps to a sequence of
N internal characters, but that external sequence has no
subsequence that maps to N-1 internal characters.)

Some of the wording in the standard, such as the description of
codecvt::do_max_length (22.4.1.4.2 [locale.codecvt.virtuals],
paragraph 11) and basic_filebuf::underflow (27.9.1.5 [filebuf.virtuals], paragraph 3) suggests that it must always be
possible to pick off internal characters one at a time from a sequence
of external characters. However, this is never explicitly stated one
way or the other.

This issue seems (and is) quite technical, but it is important if
we expect users to provide their own encoding facets. This is an area
where the standard library calls user-supplied code, so a well-defined
set of requirements for the user-supplied code is crucial. Users must
be aware of the assumptions that the library makes. This issue affects
positioning operations on basic_filebuf, unbuffered input,
and several of codecvt's member functions.

Proposed resolution:

Add the following text as a new paragraph, following 22.4.1.4.2 [locale.codecvt.virtuals] paragraph 2:

A codecvt facet that is used by basic_filebuf
(27.9 [file.streams]) must have the property that if

do_out(state, from, from_end, from_next, to, to_lim, to_next)

would return ok, where from != from_end, then

do_out(state, from, from + 1, from_next, to, to_end, to_next)

must also return ok, and that if

do_in(state, from, from_end, from_next, to, to_lim, to_next)

would return ok, where to != to_lim, then

do_in(state, from, from_end, from_next, to, to + 1, to_next)

must also return ok. [Footnote: Informally, this
means that basic_filebuf assumes that the mapping from
internal to external characters is 1 to N: a codecvt that is
used by basic_filebuf must be able to translate characters
one internal character at a time. --End Footnote]

[Redmond: Minor change in proposed resolution. Original
proposed resolution talked about "success", with a parenthetical
comment that success meant returning ok. New wording
removes all talk about "success", and just talks about the
return value.]

Rationale:

The proposed resolution says that conversions can be performed one
internal character at a time. This rules out some encodings that
would otherwise be legal. The alternative answer would mean there
would be some internal positions that do not correspond to any
external file position.

An example of an encoding that this rules out is one where the
internT and externT are of the same type, and
where the internal sequence c1 c2 corresponds to the
external sequence c2 c1.

It was generally agreed that basic_filebuf relies
on this property: it was designed under the assumption that
the external-to-internal mapping is N-to-1, and it is not clear
that basic_filebuf is implementable without that
restriction.

The proposed resolution is expressed as a restriction on
codecvt when used by basic_filebuf, rather
than a blanket restriction on all codecvt facets,
because basic_filebuf is the only other part of the
library that uses codecvt. If a user wants to define
a codecvt facet that implements a more general N-to-M
mapping, there is no reason to prohibit it, so long as the user
does not expect basic_filebuf to be able to use it.

Many string member functions throw if the size reaches or exceeds
npos. However, I wonder why they don't throw if the size reaches or
exceeds max_size() instead of npos. Perhaps it is because npos is known
at compile time, while max_size() is only known at runtime. But what
happens, then, if the size exceeds max_size() without exceeding npos?
It seems the standard lacks some clarification here.

Proposed resolution:

After 21.4 [basic.string] paragraph 4 ("The functions
described in this clause...") add a new paragraph:

For any string operation, if as a result of the operation, size() would exceed
max_size() then
the operation throws length_error.

Operator >> and getline() for strings read until eof() in the input
stream is true. However, this might never happen if the stream can no
longer read without reaching EOF. So shouldn't they be changed to
read until !good() instead?

Proposed resolution:

In 21.4.8.9 [string.io], paragraph 1, replace:

Effects: Begins by constructing a sentry object k as if k were
constructed by typename basic_istream<charT,traits>::sentry k( is). If
bool( k) is true, it calls str.erase() and then extracts characters
from is and appends them to str as if by calling str.append(1, c). If
is.width() is greater than zero, the maximum number n of characters
appended is is.width(); otherwise n is str.max_size(). Characters are
extracted and appended until any of the following occurs:

with:

Effects: Behaves as a formatted input function (27.7.2.2.1 [istream.formatted.reqmts]). After constructing a sentry object, if the
sentry converts to true, calls str.erase() and then extracts
characters from is and appends them to str as if by calling
str.append(1,c). If is.width() is greater than zero, the maximum
number n of characters appended is is.width(); otherwise n is
str.max_size(). Characters are extracted and appended until any of the
following occurs:

In 21.4.8.9 [string.io], paragraph 6, replace

Effects: Begins by constructing a sentry object k as if by typename
basic_istream<charT,traits>::sentry k( is, true). If bool( k) is true,
it calls str.erase() and then extracts characters from is and appends
them to str as if by calling str.append(1, c) until any of the
following occurs:

with:

Effects: Behaves as an unformatted input function (27.7.2.3 [istream.unformatted]), except that it does not affect the value returned
by subsequent calls to basic_istream<>::gcount(). After
constructing a sentry object, if the sentry converts to true, calls
str.erase() and then extracts characters from is and appends them to
str as if by calling str.append(1,c) until any of the following
occurs:

[Redmond: Made changes in proposed resolution. operator>>
should be a formatted input function, not an unformatted input function.
getline should not be required to set gcount, since
there is no mechanism for gcount to be set except by one of
basic_istream's member functions.]

[Curaçao: Nico agrees with proposed resolution.]

Rationale:

The real issue here is whether or not these string input functions
get their characters from a streambuf rather than by calling an
istream's member functions. A streambuf signals failure either by
returning eof or by throwing an exception; there are no other
possibilities. The proposed resolution makes it clear that these two
functions do get characters from a streambuf.

The algorithm uses find_if() to find the first element that should
be removed. However, it then uses a copy of the passed function object
to process the resulting elements (if any). Here, Nth is used again
and also removes the sixth element. This behavior compromises the
advantage of function objects being able to have state. It could be
avoided at no cost (just implement the algorithm directly instead of
calling find_if()).

Proposed resolution:

Add a new paragraph following 25 [algorithms] paragraph 8:

[Note: Unless otherwise specified, algorithms that take function
objects as arguments are permitted to copy those function objects
freely. Programmers for whom object identity is important should
consider using a wrapper class that points to a noncopied
implementation object, or some equivalent solution.]

[Dublin: Pete Becker felt that this may not be a defect,
but rather something that programmers need to be educated about.
There was discussion of adding wording to the effect that the number
and order of calls to function objects, including predicates, not
affect the behavior of the function object.]

[Pre-Kona: Nico comments: It seems the problem is that we don't
have a clear statement of "predicate" in the
standard. People including me seemed to think "a function
returning a Boolean value and being able to be called by an STL
algorithm or be used as sorting criterion or ... is a
predicate". But a predicate has more requirements: It should
never change its behavior due to a call or being copied. IMHO we have
to state this in the standard. If you like, see section 8.1.4 of my
library book for a detailed discussion.]

[Kona: Nico will provide wording to the effect that "unless
otherwise specified, the number of copies of and calls to function
objects by algorithms is unspecified". Consider placing in
25 [algorithms] after paragraph 9.]

[Santa Cruz: The standard doesn't currently guarantee that
function objects won't be copied, and what isn't forbidden is
allowed. It is believed (especially since implementations that were
written in concert with the standard do make copies of function
objects) that this was intentional. Thus, no normative change is
needed. What we should put in is a non-normative note suggesting to
programmers that if they want to guarantee the lack of copying they
should use something like the ref wrapper.]

There are two problems with this. First, the return type is
specified to be "T", as opposed to something like "convertible to T".
This is too specific: we want to allow *r++ to return an lvalue.

Second, writing the semantics in terms of code misleadingly
suggests that the effects *r++ should precisely replicate the behavior
of this code, including side effects. (Does this mean that *r++
should invoke the copy constructor exactly as many times as the sample
code above would?) See issue 334 for a similar
problem.

Proposed resolution:

In Table 72 in 24.2.3 [input.iterators], change the return type
for *r++ from T to "convertible to T".

Rationale:

This issue has two parts: the return type, and the number of times
the copy constructor is invoked.

The LWG believes that the first part is a real issue. It's
inappropriate for the return type to be specified so much more
precisely for *r++ than it is for *r. In particular, if r is of
(say) type int*, then *r++ isn't int,
but int&.

The LWG does not believe that the number of times the copy
constructor is invoked is a real issue. This can vary in any case,
because of language rules on copy constructor elision. That's too
much to read into these semantics clauses.

Additionally, as Dave Abrahams pointed out (c++std-lib-13703): since
we're told (24.1/3) that forward iterators satisfy all the requirements
of input iterators, we can't impose any requirements in the Input
Iterator requirements table that forward iterators don't satisfy.

103. set::iterator is required to be modifiable, but this allows modification of keys

Set::iterator is described as implementation-defined with a
reference to the container requirement; the container requirement says
that const_iterator is an iterator pointing to const T and iterator an
iterator pointing to T.

23.1.2 paragraph 2 implies that the keys should not be modified to
break the ordering of elements. But that is not clearly
specified. Especially considering that the current standard requires
that iterator for associative containers be different from
const_iterator. Set, for example, has the following:

23.2 [container.requirements] actually requires that the iterator type
point to T (table 65). Disallowing user modification of keys by changing
the standard to require an associative container's iterator to be the
same as const_iterator would be overkill, since that would unnecessarily
and significantly restrict the usage of associative containers. A class
used as an element of a set, for example, could no longer be modified
easily without either redesigning the class (using mutable on fields
that have nothing to do with the ordering) or using const_cast, which
defeats requiring iterator to be const_iterator. The proposed solution
is in line with trusting that the user knows what he is doing.

Other Options Evaluated:

Option A. In 23.2.4 [associative.reqmts], paragraph 2, after
first sentence, and before "In addition,...", add one line:

Modification of keys shall not change their strict weak ordering.

Option B. Add three new sentences to 23.2.4 [associative.reqmts]:

At the end of paragraph 5: "Keys in an associative container
are immutable." At the end of paragraph 6: "For
associative containers where the value type is the same as the key
type, both iterator and const_iterator are
constant iterators. It is unspecified whether or not
iterator and const_iterator are the same
type."

Option C. To 23.2.4 [associative.reqmts], paragraph 3, which
currently reads:

The phrase ``equivalence of keys'' means the equivalence relation imposed by the
comparison and not the operator== on keys. That is, two keys k1 and k2 in the same
container are considered to be equivalent if for the comparison object comp, comp(k1, k2)
== false && comp(k2, k1) == false.

add the following:

For any two keys k1 and k2 in the same container, comp(k1, k2) shall return the same
value whenever it is evaluated. [Note: If k2 is removed from the container and later
reinserted, comp(k1, k2) must still return a consistent value but this value may be
different than it was the first time k1 and k2 were in the same container. This is
intended to allow usage like a string key that contains a filename, where comp compares
file contents; if k2 is removed, the file is changed, and the same k2 (filename) is
reinserted, comp(k1, k2) must again return a consistent value but this value may be
different than it was the previous time k2 was in the container.]

Proposed resolution:

Add the following to 23.2.4 [associative.reqmts] at
the indicated location:

At the end of paragraph 3: "For any two keys k1 and k2 in the same container,
calling comp(k1, k2) shall always return the same
value."

At the end of paragraph 5: "Keys in an associative container are immutable."

At the end of paragraph 6: "For associative containers where the value type is the
same as the key type, both iterator and const_iterator are constant
iterators. It is unspecified whether or not iterator and const_iterator
are the same type."

Rationale:

Several arguments were advanced for and against allowing set elements to be
mutable as long as the ordering was not affected. The argument which swayed the
LWG was one of safety: if elements were mutable, there would be no compile-time
way to detect a simple user oversight that caused the ordering to be
modified. There was a report that this had actually happened in practice,
and had been painful to diagnose. If users need to modify elements,
it is possible to use mutable members or const_cast.

Simply requiring that keys be immutable is not sufficient, because the comparison
object may indirectly (via pointers) operate on values outside of the keys.

The types iterator and const_iterator are permitted
to be different types to allow for potential future work in which some
member functions might be overloaded between the two types. No such
member functions exist now, and the LWG believes that user functionality
will not be impaired by permitting the two types to be the same. A
function that operates on both iterator types can be defined for
const_iterator alone, and can rely on the automatic
conversion from iterator to const_iterator.

108. Lifetime of exception::what() return unspecified

In 18.6.1, paragraphs 8-9, the lifetime of the return value of
exception::what() is left unspecified. This issue has implications
for the exception safety of exception handling: some exceptions should
not throw bad_alloc.

There are no versions of binders that apply to non-const elements
of a sequence. This makes examples like for_each() using bind2nd() on
page 521 of "The C++ Programming Language (3rd)"
non-conforming. Suitable versions of the binders need to be added.

[Kona: The LWG discussed this at some length. It was agreed that
this is a mistake in the design, but there was no consensus on whether
it was a defect in the Standard. Straw vote: NAD - 5. Accept
proposed resolution - 3. Leave open - 6.]

Effects: Constructs an object of class strstream, initializing the base class with
iostream(& sb) and initializing sb with one of the two constructors:

- If mode&app==0, then s shall designate the first element of an array of n
elements. The constructor is strstreambuf(s, n, s).

- If mode&app==0, then s shall designate the first element of an array of n
elements that contains an NTBS whose first element is designated by s. The constructor is
strstreambuf(s, n, s+std::strlen(s)).

Notice the second condition is the same as the first. I think the second condition
should be "If mode&app==app", or "mode&app!=0", meaning that
the append bit is set.

Proposed resolution:

In D.9.3.1 [depr.ostrstream.cons] paragraph 2 and D.9.4.1 [depr.strstream.cons]
paragraph 2, change the first condition to (mode&app)==0
and the second condition to (mode&app)!=0.

The effects clause for numeric inserters says that
insertion of a value x, whose type is either bool,
short, unsigned short, int, unsigned
int, long, unsigned long, float,
double, long double, or const void*, is
delegated to num_put, and that insertion is performed as if
through the following code fragment:

This doesn't work, because num_put<>::put is only
overloaded for the types bool, long, unsigned
long, double, long double, and const
void*. That is, the code fragment in the standard is incorrect
(it is diagnosed as ambiguous at compile time) for the types
short, unsigned short, int, unsigned
int, and float.

We must either add new member functions to num_put, or
else change the description in ostream so that it only calls
functions that are actually there. I prefer the latter.

Proposed resolution:

Replace 27.6.2.5.2, paragraph 1 with the following:

The classes num_get<> and num_put<> handle locale-dependent numeric
formatting and parsing. These inserter functions use the imbued
locale value to perform numeric formatting. When val is of type bool,
long, unsigned long, double, long double, or const void*, the
formatting conversion occurs as if it performed the following code
fragment:

[post-Toronto: This differs from the previous proposed
resolution; PJP provided the new wording. The differences are in
signed short and int output.]

Rationale:

The original proposed resolution was to cast int and short to long,
unsigned int and unsigned short to unsigned long, and float to double,
thus ensuring that we don't try to use nonexistent num_put<>
member functions. The current proposed resolution is more
complicated, but gives more expected results for hex and octal output
of signed short and signed int. (On a system with 16-bit short, for
example, printing short(-1) in hex format should yield 0xffff.)

Formatted input is defined for the types short, unsigned short, int,
unsigned int, long, unsigned long, float, double,
long double, bool, and void*. According to section 27.6.1.2.2,
formatted input of a value x is done as if by the following code fragment:

According to section 22.4.2.1.1 [facet.num.get.members], however,
num_get<>::get() is only overloaded for the types
bool, long, unsigned short, unsigned
int, unsigned long, unsigned long,
float, double, long double, and
void*. Comparing the lists from the two sections, we find
that 27.6.1.2.2 is using a nonexistent function for types
short and int.

Proposed resolution:

In 27.7.2.2.2 [istream.formatted.arithmetic] Arithmetic Extractors, remove the
two lines (1st and 3rd) which read:

operator>>(short& val);
...
operator>>(int& val);

And add the following at the end of that section (27.6.1.2.2) :

operator>>(short& val);

The conversion occurs as if performed by the following code fragment (using
the same notation as for the preceding code fragment):

120. Can an implementor add specializations?

The original issue asked whether a library implementor could
specialize standard library templates for built-in types. (This was
an issue because users are permitted to explicitly instantiate
standard library templates.)

Specializations are no longer a problem, because of the resolution
to core issue 259. Under the proposed resolution, it will be legal
for a translation unit to contain both a specialization and an
explicit instantiation of the same template, provided that the
specialization comes first. In such a case, the explicit
instantiation will be ignored. Further discussion of library issue
120 assumes that the core 259 resolution will be adopted.

However, as noted in lib-7047, one piece of this issue still
remains: what happens if a standard library implementor explicitly
instantiates a standard library template? It's illegal for a program
to contain two different explicit instantiations of the same template
for the same type in two different translation units (an ODR violation),
and the core working group doesn't believe it is practical to relax
that restriction.

The issue, then, is: are users allowed to explicitly instantiate
standard library templates for non-user defined types? The status quo
answer is 'yes'. Changing it to 'no' would give library implementors
more freedom.

This is an issue because, for performance reasons, library
implementors often need to explicitly instantiate standard library
templates (for example, std::basic_string<char>). Does giving
users freedom to explicitly instantiate standard library templates for
non-user defined types make it impossible or painfully difficult for
library implementors to do this?

John Spicer suggests, in lib-8957, that library implementors have a
mechanism they can use for explicit instantiations that doesn't
prevent users from performing their own explicit instantiations: put
each explicit instantiation in its own object file. (Different
solutions might be necessary for Unix DSOs or MS-Windows DLLs.) On
some platforms, library implementors might not need to do anything
special: the "undefined behavior" that results from having two
different explicit instantiations might be harmless.

Proposed resolution:

Append to 17.6.4.3 [reserved.names] paragraph 1:

A program may explicitly instantiate any templates in the standard
library only if the declaration depends on the name of a user-defined
type of external linkage and the instantiation meets the standard library
requirements for the original template.

[Kona: changed the wording from "a user-defined name" to "the name of
a user-defined type"]

Rationale:

The LWG considered another possible resolution:

In light of the resolution to core issue 259, no normative changes
in the library clauses are necessary. Add the following non-normative
note to the end of 17.6.4.3 [reserved.names] paragraph 1:

[Note: A program may explicitly instantiate standard library
templates, even when an explicit instantiation does not depend on
a user-defined name. --end note]

The LWG rejected this because it was believed that it would make
it unnecessarily difficult for library implementors to write
high-quality implementations. A program may not include an
explicit instantiation of the same template, for the same template
arguments, in two different translation units. If users are
allowed to provide explicit instantiations of Standard Library
templates for built-in types, then library implementors aren't,
at least not without nonportable tricks.

The most serious problem is a class template that has writable
static member variables. Unfortunately, such class templates are
important and, in existing Standard Library implementations, are
often explicitly instantiated by library implementors: locale facets,
which have a writable static member variable id. If a
user's explicit instantiation collided with the implementation's
explicit instantiation, iostream initialization could cause locales
to be constructed in an inconsistent state.

One proposed implementation technique was for Standard Library
implementors to provide explicit instantiations in separate object
files, so that they would not be picked up by the linker when the
user also provides an explicit instantiation. However, this
technique only applies for Standard Library implementations that
are packaged as static archives. Most Standard Library
implementations nowadays are packaged as dynamic libraries, so this
technique would not apply.

The Committee is now considering standardization of dynamic
linking. If there are such changes in the future, it may be
appropriate to revisit this issue later.

122. streambuf/wstreambuf description should not say they are specializations

One of the operator= in the valarray helper arrays is const and one
is not. For example, look at slice_array. This operator= in Section
26.6.5.2 [slice.arr.assign] is const:

void operator=(const valarray<T>&) const;

but this one in Section 26.6.5.4 [slice.arr.fill] is not:

void operator=(const T&);

The description of the semantics for these two functions is similar.

Proposed resolution:

26.6.5 [template.slice.array] Template class slice_array

In the class template definition for slice_array, replace the member
function declaration

void operator=(const T&);

with

void operator=(const T&) const;

26.6.5.4 [slice.arr.fill] slice_array fill function

Change the function declaration

void operator=(const T&);

to

void operator=(const T&) const;

26.6.7 [template.gslice.array] Template class gslice_array

In the class template definition for gslice_array, replace the member
function declaration

void operator=(const T&);

with

void operator=(const T&) const;

26.6.7.4 [gslice.array.fill] gslice_array fill function

Change the function declaration

void operator=(const T&);

to

void operator=(const T&) const;

26.6.8 [template.mask.array] Template class mask_array

In the class template definition for mask_array, replace the member
function declaration

void operator=(const T&);

with

void operator=(const T&) const;

26.6.8.4 [mask.array.fill] mask_array fill function

Change the function declaration

void operator=(const T&);

to

void operator=(const T&) const;

26.6.9 [template.indirect.array] Template class indirect_array

In the class template definition for indirect_array, replace the member
function declaration

void operator=(const T&);

with

void operator=(const T&) const;

26.6.9.4 [indirect.array.fill] indirect_array fill function

Change the function declaration

void operator=(const T&);

to

void operator=(const T&) const;

[Redmond: Robert provided wording.]

Rationale:

There's no good reason for one version of operator= being const and
another one not. Because of issue 253, this now
matters: these functions are now callable in more circumstances. In
many existing implementations, both versions are already const.

125. valarray<T>::operator!() return type is inconsistent

In Section 26.6.2 [template.valarray] valarray<T>::operator!() is
declared to return a valarray<T>, but in Section 26.6.2.6 [valarray.unary] it is declared to return a valarray<bool>. The
latter appears to be correct.

Proposed resolution:

Change in Section 26.6.2 [template.valarray] the declaration of
operator!() so that the return type is
valarray<bool>.

127. auto_ptr<> conversion issues

There are two problems with the current auto_ptr wording
in the standard:

First, the auto_ptr_ref definition cannot be nested
because auto_ptr<Derived>::auto_ptr_ref is unrelated to
auto_ptr<Base>::auto_ptr_ref. Also submitted by
Nathan Myers, with the same proposed resolution.

Second, there is no auto_ptr assignment operator taking an
auto_ptr_ref argument.

I have discussed these problems with my proposal coauthor, Bill
Gibbons, and with some compiler and library implementors, and we
believe that these problems are not desired or desirable implications
of the standard.

2 Feb 2000: Lisa Lippincott comments: [The resolution of] this issue
states that the conversion from auto_ptr to auto_ptr_ref should
be const. This is not acceptable, because it would allow
initialization and assignment from _any_ const auto_ptr! It also
introduces an implementation difficulty in writing this conversion
function -- namely, somewhere along the line, a const_cast will be
necessary to remove that const so that release() may be called. This
may result in undefined behavior [7.1.5.1/4]. The conversion
operator does not have to be const, because a non-const implicit
object parameter may be bound to an rvalue [13.3.3.1.4/3]
[13.3.1/5].

Tokyo: The LWG removed the following from the proposed resolution:

In 20.9.4 [meta.unary], paragraph 2, and 20.9.4.3 [meta.unary.prop],
paragraph 2, make the conversion to auto_ptr_ref const:

Currently, the standard does not specify how seekg() and seekp()
indicate failure. They are not required to set failbit, and they can't
return an error indication because they must return *this, i.e. the
stream. Hence, it is undefined what happens if they fail. And they
can fail, for instance, when a file stream is disconnected from the
underlying file (is_open()==false) or when a wide character file
stream must perform a state-dependent code conversion, etc.

The stream functions seekg() and seekp() should set failbit in the
stream state in case of failure.

Proposed resolution:

Add to the Effects: clause of seekg() in
27.7.2.3 [istream.unformatted] and to the Effects: clause of seekp() in
27.7.3.5 [ostream.seeks]:

In case of failure, the function calls setstate(failbit) (which may throw ios_base::failure).

Table 67 (23.1.1) says that container::erase(iterator) returns an
iterator. Table 69 (23.1.2) says that, in addition to this requirement,
container::erase(iterator) returns void for associative
containers. That's not an addition; it's a change to the
requirements, which has the effect of making associative containers
fail to meet the requirements for containers.

Proposed resolution:

In 23.2.4 [associative.reqmts], in Table 69 Associative container
requirements, change the return type of a.erase(q) from
void to iterator. Change the
assertion/not/pre/post-condition from "erases the element pointed to
by q" to "erases the element pointed to by q.
Returns an iterator pointing to the element immediately following q
prior to the element being erased. If no such element exists, a.end()
is returned."

In 23.2.4 [associative.reqmts], in Table 69 Associative container
requirements, change the return type of a.erase(q1, q2)
from void to iterator. Change the
assertion/not/pre/post-condition from "erases the elements in the
range [q1, q2)" to "erases the elements in the range [q1,
q2). Returns q2."

In 23.4.4 [map], in the map class synopsis; and
in 23.4.5 [multimap], in the multimap class synopsis; and
in 23.4.6 [set], in the set class synopsis; and
in 23.4.7 [multiset], in the multiset class synopsis:
change the signature of the first erase overload to

[
Sydney: the proposed wording went in the right direction, but it
wasn't good enough. We want to return an iterator from the range form
of erase as well as the single-iterator form. Also, the wording is
slightly different from the wording we have for sequences; there's no
good reason for having a difference. Matt provided new wording,
(reflected above) which we will review at the next meeting.
]

134. vector constructors over specified

The complexity description says: "It does at most 2N calls to the copy constructor
of T and logN reallocations if they are just input iterators ...".

This appears to be overly restrictive, dictating the precise memory/performance
tradeoff for the implementor.

Proposed resolution:

Change 23.3.6.2 [vector.cons], paragraph 1 to:

-1- Complexity: The constructor template <class
InputIterator> vector(InputIterator first, InputIterator last)
makes only N calls to the copy constructor of T (where N is the
distance between first and last) and no reallocations if iterators
first and last are of forward, bidirectional, or random access
categories. It makes order N calls to the copy constructor of T and
order logN reallocations if they are just input iterators.

Rationale:

"at most 2N calls" is correct only if the growth factor
is greater than or equal to 2.

I may be misunderstanding the intent, but should not seekg set only
the input stream and seekp set only the output stream? The description
seems to say that each should set both input and output streams. If
that's really the intent, I withdraw this proposal.

[Dublin: Dietmar Kühl thinks this is probably correct, but would
like the opinion of more iostream experts before taking action.]

[Tokyo: Reviewed by the LWG. PJP noted that although his docs are
incorrect, his implementation already implements the Proposed
Resolution.]

[Post-Tokyo: Matt Austern comments:
Is it a problem with basic_istream and basic_ostream, or is it a problem
with basic_stringbuf?
We could resolve the issue either by changing basic_istream and
basic_ostream, or by changing basic_stringbuf. I prefer the latter
change (or maybe both changes): I don't see any reason for the standard to
require that std::stringbuf s(std::string("foo"), std::ios_base::in);
s.pubseekoff(0, std::ios_base::beg); must fail.
This requirement is a bit weird. There's no similar requirement
for basic_streambuf<>::seekpos, or for basic_filebuf<>::seekoff or
basic_filebuf<>::seekpos.]

-4- In the call to use_facet<Facet>(loc), the type argument
chooses a facet, making available all members of the named type. If
Facet is not present in a locale (or, failing that, in the global
locale), it throws the standard exception bad_cast. A C++ program can
check if a locale implements a particular facet with the template
function has_facet<Facet>().

This contradicts the specification given in section
22.3.2 [locale.global.templates]:

template <class Facet> const Facet& use_facet(const
locale& loc);

-1- Get a reference to a facet of a locale.
-2- Returns: a reference to the corresponding facet of loc, if present.
-3- Throws: bad_cast if has_facet<Facet>(loc) is false.
-4- Notes: The reference returned remains valid at least as long as any copy of loc exists

Proposed resolution:

Remove the phrase "(or, failing that, in the global locale)"
from section 22.1.1.

Rationale:

Needed for consistency with the way locales are handled elsewhere
in the standard.

A. It says ``The operations in table 68 are provided only for the containers for which
they take constant time.''

That could be interpreted in two ways, one of them being ``Even though table 68 shows
particular operations as being provided, implementations are free to omit them if they
cannot implement them in constant time.''

B. That paragraph says nothing about amortized constant time, and it should.

Table 68 lists sequence operations that are provided for some types of sequential
containers but not others. An implementation shall provide these operations for all
container types shown in the ``container'' column, and shall implement them so as to take
amortized constant time.

144. Deque constructor complexity wrong

In 23.3.3.2 [deque.cons] paragraph 6, the deque ctor that takes an iterator range appears
to have complexity requirements which are incorrect, and which contradict the
complexity requirements for insert(). I suspect that the text in question,
below, was taken from vector:

Complexity: If the iterators first and last are forward iterators,
bidirectional iterators, or random access iterators the constructor makes only
N calls to the copy constructor, and performs no reallocations, where N is
last - first.

The word "reallocations" does not really apply to deque. Further,
all of the following appears to be spurious:

It makes at most 2N calls to the copy constructor of T and log N
reallocations if they are input iterators.1)

1) The complexity is greater in the case of input iterators because each
element must be added individually: it is impossible to determine the distance
between first abd last before doing the copying.

This makes perfect sense for vector, but not for deque. Why should deque gain
an efficiency advantage from knowing in advance the number of elements to
insert?

Proposed resolution:

In 23.3.3.2 [deque.cons] paragraph 6, replace the Complexity description, including the
footnote, with the following text (which also corrects the "abd"
typo):

Effects: Extracts a complex number x of the form: u, (u), or (u,v),
where u is the real part and v is the imaginary part
(lib.istream.formatted).
Requires: The input values be convertible to T. If bad input is
encountered, calls is.setstate(ios::failbit) (which may throw
ios::failure (lib.iostate.flags)).
Returns: is.

Is it intended that the extractor for complex numbers does not skip
whitespace, unlike all other extractors in the standard library?
Shouldn't a sentry be used?

Is it intended that the inserter for complex numbers ignores the
field width and does not do any padding? If, with the suggested
implementation above, the field width were set in the stream then the
opening parentheses would be adjusted, but the rest not, because the
field width is reset to zero after each insertion.

I think that both operations should use sentries, for sake of
consistency with the other inserters and extractors in the
library. Regarding the issue of padding in the inserter, I don't know
what the intent was.

The library had many global functions until 17.4.1.1 [lib.contents]
paragraph 2 was added:

All library entities except macros, operator new and operator
delete are defined within the namespace std or namespaces nested
within namespace std.

It appears "global function" was never updated in the following:

17.4.4.3 - Global functions [lib.global.functions]

-1- It is unspecified whether any global functions in the C++ Standard
Library are defined as inline (dcl.fct.spec).

-2- A call to a global function signature described in Clauses
lib.language.support through lib.input.output behaves the same as if
the implementation declares no additional global function
signatures.*

[Footnote: A valid C++ program always calls the expected library
global function. An implementation may also define additional
global functions that would otherwise not be called by a valid C++
program. --- end footnote]

-3- A global function cannot be declared by the implementation as
taking additional default arguments.

17.4.4.4 - Member functions [lib.member.functions]

-2- An implementation can declare additional non-virtual member
function signatures within a class:

-- by adding arguments with default values to a member function
signature; The same latitude does not extend to the implementation of
virtual or global functions, however.

148. Functions in the example facet BoolNames should be const

In 22.4.8 [facets.examples] paragraph 13, the do_truename() and
do_falsename() functions in the example facet BoolNames should be
const. The functions they are overriding in
numpunct_byname<char> are const.

Proposed resolution:

In 22.4.8 [facets.examples] paragraph 13, insert "const" in
two places:

Suppose that c and c1 are sequential containers and i is an
iterator that refers to an element of c. Then I can insert a copy of
c1's elements into c ahead of element i by executing

c.insert(i, c1.begin(), c1.end());

If c is a vector, it is fairly easy for me to find out where the
newly inserted elements are, even though i is now invalid:

size_t i_loc = i - c.begin();
c.insert(i, c1.begin(), c1.end());

and now the first inserted element is at c.begin()+i_loc and one
past the last is at c.begin()+i_loc+c1.size().

But what if c is a list? I can still find the location of one
past the last inserted element, because i is still valid.
To find the location of the first inserted element, though,
I must execute something like

But I think the right solution is to change the definition of insert
so that instead of returning void, it returns an iterator that refers
to the first element inserted, if any, and otherwise is a copy of its
first argument.

[
Summit:
]

Reopened by Alisdair.

[
Post Summit Alisdair adds:
]

In addition to the original rationale for C++03, this change also gives a
consistent interface for all container insert operations i.e. they all
return an iterator to the (first) inserted item.

Proposed wording provided.

[
2009-07 Frankfurt
]

Q: why isn't this change also proposed for associative containers?

A: The returned iterator wouldn't necessarily point to a contiguous range.

Moved to Ready.

Proposed resolution:

23.2.3 [sequence.reqmts] Table 83
change return type from void to iterator for the following rows:

Table 83 — Sequence container requirements (in addition to container)

Expression

Return type

Assertion/note pre-/post-condition

a.insert(p,n,t)

void → iterator

Inserts n copies of t before p.

a.insert(p,i,j)

void → iterator

Each iterator in the range [i,j) shall be
dereferenced exactly once.
pre: i and j are not iterators into a.
Inserts copies of elements in [i, j) before p

a.insert(p,il)

void → iterator

a.insert(p, il.begin(), il.end()).

Add after p6 23.2.3 [sequence.reqmts]:

-6- ...

The iterator returned from a.insert(p,n,t) points to the copy of the
first element inserted into a, or p if n == 0.

The iterator returned from a.insert(p,i,j) points to the copy of the
first element inserted into a, or p if i == j.

The iterator returned from a.insert(p,il) points to the copy of the
first element inserted into a, or p if il is empty.

For both sequences and associative containers, a.clear() has the
semantics of erase(a.begin(),a.end()), which is undefined for an empty
container since erase(q1,q2) requires that q1 be dereferenceable
(23.1.1,3 and 23.1.2,7). When the container is empty, a.begin() is
not dereferenceable.

The requirement that q1 be unconditionally dereferenceable causes many
operations to be intuitively undefined, of which clearing an empty
container is probably the most dire.

Since q1 and q2 are only referenced in the range [q1, q2), and [q1,
q2) is required to be a valid range, stating that q1 and q2 must be
iterators or certain kinds of iterators is unnecessary.

The semantics of scan_is() (paragraphs 4 and 6) are not exactly described,
because there is no function is() which takes only a character as
argument. Also, in the effects clause (paragraph 3), the semantics are kept
vague.

The description of the array version of narrow() (in
paragraph 11) is flawed: There is no member do_narrow() which
takes only three arguments because in addition to the range a default
character is needed.

Additionally, for both widen and narrow we have
two signatures followed by a Returns clause that only addresses
one of them.

[Kona: 1) the problem occurs in additional places, 2) a user
defined version could be different.]

[Post-Tokyo: Dietmar provided the above wording at the request of
the LWG. He could find no other places the problem occurred. He
asks for clarification of the Kona "a user defined
version..." comment above. Perhaps it was a circuitous way of
saying "dfault" needed to be uncommented?]

[Post-Toronto: the issues list maintainer has merged in the
proposed resolution from issue 207, which addresses the
same paragraphs.]

The table in paragraph 7 for the length modifier does not list the length
modifier l to be applied if the type is double. Thus, the
standard asks the implementation to do undefined things when using scanf()
(the missing length modifier for scanf() when scanning doubles
is actually a problem I found quite often in production code, too).

Proposed resolution:

In 22.4.2.1.2 [facet.num.get.virtuals], paragraph 7, add a row in the Length
Modifier table to say that for double a length modifier
l is to be used.

Rationale:

The standard makes an embarrassing beginner's mistake.

155. Typo in naming the class defining the class Init

There are conflicting statements about where the class
Init is defined. According to 27.4 [iostream.objects] paragraph 2
it is defined as basic_ios::Init, according to 27.5.3 [ios.base] it is defined as ios_base::Init.

156. Typo in imbue() description

There is a small discrepancy between the declarations of
imbue(): in 27.5.3 [ios.base] the argument is passed as
locale const& (correct), in 27.5.3.3 [ios.base.locales] it
is passed as locale const (wrong).

Proposed resolution:

In 27.5.3.3 [ios.base.locales] change the imbue argument
from "locale const" to "locale
const&".

The default behavior of setbuf() is described only for the
situation that gptr() != 0 && gptr() != egptr():
namely to do nothing. What has to be done in other situations
is not described although there is actually only one reasonable
approach, namely to do nothing, too.

Since changing the buffer would almost certainly mess up most
buffer management of derived classes unless these classes do it
themselves, the default behavior of setbuf() should always be
to do nothing.

159. Strange use of underflow()

The description of the meaning of the result of
showmanyc() seems to be rather strange: It uses calls to
underflow(). Using underflow() is strange because
this function only reads the current character but does not extract
it, uflow() would extract the current character. This should
be fixed to use sbumpc() instead.

Proposed resolution:

Change 27.6.3.4.3 [streambuf.virt.get] paragraph 1,
showmanyc() returns clause, by replacing the word
"supplied" with the words "extracted from the
stream".

164. do_put() has apparently unused fill argument

In 22.4.5.3.2 [locale.time.put.virtuals] the do_put() function is specified
as taking a fill character as an argument, but the description of the
function does not say whether the character is used at all and, if so,
in which way. The same holds for any format control parameters that
are accessible through the ios_base& argument, such as the
adjustment or the field width. Is strftime() supposed to use the fill
character in any way? In any case, the specification of
time_put.do_put() looks inconsistent to me.

Is the
signature of do_put() wrong, or is the effects clause incomplete?

Proposed resolution:

Add the following note after 22.4.5.3.2 [locale.time.put.virtuals]
paragraph 2:

[Note: the fill argument may be used in the implementation-defined formats, or by derivations. A space character is a reasonable default
for this argument. --end Note]

Rationale:

The LWG felt that while the normative text was correct,
users need some guidance on what to pass for the fill
argument since the standard doesn't say how it's used.

165. xsputn(), pubsync() never called by basic_ostream members?

Paragraph 2 explicitly states that none of the basic_ostream
functions falling into one of the groups "formatted output functions"
and "unformatted output functions" calls any stream buffer function
which might call a virtual function other than overflow(). Basically
this is fine but this implies that sputn() (this function would call
the virtual function xsputn()) is never called by any of the standard
output functions. Is this really intended? At minimum it would be convenient to
call xsputn() for strings... Also, the statement that overflow()
is the only virtual member of basic_streambuf called is in conflict
with the definition of flush() which calls rdbuf()->pubsync()
and thereby the virtual function sync() (flush() is listed
under "unformatted output functions").

In addition, I guess that the sentence starting with "They may use other
public members of basic_ostream ..." probably was intended to
start with "They may use other public members of basic_streambuf..."
although the problem with the virtual members exists in both cases.

I see two obvious resolutions:

state in a footnote that this means that xsputn() will never be
called by any ostream member and that this is intended.

relax the restriction and allow calling overflow() and xsputn().
Of course, the problem with flush() has to be resolved in some way.

Proposed resolution:

Change the last sentence of 27.6.2.1 (lib.ostream) paragraph 2 from:

They may use other public members of basic_ostream except that they do not
invoke any virtual members of rdbuf() except overflow().

to:

They may use other public members of basic_ostream except that they shall
not invoke any virtual members of rdbuf() except overflow(), xsputn(), and
sync().

[Kona: the LWG believes this is a problem. Wish to ask Jerry or
PJP why the standard is written this way.]

[Post-Tokyo: Dietmar supplied wording at the request of the
LWG. He comments: The rules can be made a little bit more specific if
necessary be explicitly spelling out what virtuals are allowed to be
called from what functions and eg to state specifically that flush()
is allowed to call sync() while other functions are not.]

Paragraph 4 states that the length is determined using
traits::length(s). Unfortunately, this function is not
defined for example if the character type is wchar_t and the
type of s is char const*. Similar problems exist if
the character type is char and the type of s is
either signed char const* or unsigned char
const*.

Proposed resolution:

Change 27.7.3.6.4 [ostream.inserters.character] paragraph 4 from:

Effects: Behaves like an formatted inserter (as described in
lib.ostream.formatted.reqmts) of out. After a sentry object is
constructed it inserts characters. The number of characters starting
at s to be inserted is traits::length(s). Padding is determined as
described in lib.facet.num.put.virtuals. The traits::length(s)
characters starting at s are widened using out.widen
(lib.basic.ios.members). The widened characters and any required
padding are inserted into out. Calls width(0).

to:

Effects: Behaves like a formatted inserter (as described in
lib.ostream.formatted.reqmts) of out. After a sentry object is
constructed it inserts n characters starting at s,
where n is the number that would be computed as if by:

traits::length(s) for the overload where the first argument is of
type basic_ostream<charT, traits>& and the second is
of type const charT*, and also for the overload where the first
argument is of type basic_ostream<char, traits>& and
the second is of type const char*.

std::char_traits<char>::length(s)
for the overload where the first argument is of type
basic_ostream<charT, traits>& and the second is of type
const char*.

traits::length(reinterpret_cast<const char*>(s))
for the other two overloads.

Padding is determined as described in
lib.facet.num.put.virtuals. The n characters starting at
s are widened using out.widen (lib.basic.ios.members). The
widened characters and any required padding are inserted into
out. Calls width(0).

[Santa Cruz: Matt supplied new wording]

[Kona: changed "where n is" to " where n is the
number that would be computed as if by"]

Rationale:

We have five separate cases. In two of them we can use the
user-supplied traits class without any fuss. In the other three we
try to use something as close to that user-supplied class as possible.
In two cases we've got a traits class that's appropriate for
char and what we've got is a const signed char* or a const
unsigned char*; that's close enough so we can just use a reinterpret
cast, and continue to use the user-supplied traits class. Finally,
there's one case where we just have to give up: where we've got a
traits class for some arbitrary charT type, and we somehow have to
deal with a const char*. There's nothing better to do but fall back
to char_traits<char>.

Paragraph 8, Notes, of this section seems to mandate an extremely
inefficient way of buffer handling for basic_stringbuf,
especially in view of the restriction that basic_ostream
member functions are not allowed to use xsputn() (see 27.7.3.1 [ostream]): For each character to be inserted, a new buffer
is to be created.

Of course, the resolution below requires some handling of
simultaneous input and output since it is no longer possible to update
egptr() whenever epptr() is changed. A possible
solution is to handle this in underflow().

Proposed resolution:

In 27.8.2.4 [stringbuf.virtuals] paragraph 8, Notes, insert the words
"at least" as in the following:

To make a write position available, the function reallocates (or initially
allocates) an array object with a sufficient number of elements to hold the
current array object (if any), plus at least one additional write
position.

170. Inconsistent definition of traits_type

The classes basic_stringstream (27.8.5 [stringstream]),
basic_istringstream (27.8.3 [istringstream]), and
basic_ostringstream (27.8.4 [ostringstream]) are inconsistent
in their definition of the type traits_type: For
istringstream, this type is defined, for the other two it is
not. This should be consistent.

Proposed resolution:


To the declarations of
basic_ostringstream (27.8.4 [ostringstream]) and
basic_stringstream (27.8.5 [stringstream]) add:

typedef traits traits_type;

In 27.9.1.1 [filebuf] paragraph 3, it is stated that a joint input and
output position is maintained by basic_filebuf. Still, the
description of seekpos() seems to talk about different file
positions. In particular, it is unclear (at least to me) what is
supposed to happen to the output buffer (if there is one) if only the
input position is changed. The standard seems to mandate that the
output buffer is kept and processed as if there was no positioning of
the output position (by changing the input position). Of course, this
can be exactly what you want if the flag ios_base::ate is
set. However, I think, the standard should say something like
this:

If (which & mode) == 0 neither read nor write position is
changed and the call fails. Otherwise, the joint read and write position is
altered to correspond to sp.

If there is an output buffer, the output sequence is updated and any
unshift sequence is written before the position is altered.

If there is an input buffer, the input sequence is updated after the
position is altered.

In 27.7.2.1 [istream] the function
ignore() gets an object of type streamsize as first
argument. However, in 27.7.2.3 [istream.unformatted]
paragraph 23 the first argument is of type int.

As far as I can see this is not really a contradiction because
everything is consistent if streamsize is typedef to be
int. However, this is almost certainly not what was
intended. The same thing happened to basic_filebuf::setbuf(),
as described in issue 173.

Darin Adler also
submitted this issue, commenting: Either 27.6.1.1 should be modified
to show a first parameter of type int, or 27.6.1.3 should be modified
to show a first parameter of type streamsize and use
numeric_limits<streamsize>::max.

Proposed resolution:

In 27.7.2.3 [istream.unformatted] paragraph 23 and 24, change both uses
of int in the description of ignore() to
streamsize.

In 27.9.1.1 [filebuf] the function setbuf() gets an
object of type streamsize as second argument. However, in
27.9.1.5 [filebuf.virtuals] paragraph 9 the second argument is of type
int.

As far as I can see this is not really a contradiction because
everything is consistent if streamsize is a typedef for
int. However, this is almost certainly not what was
intended. The same thing happened to basic_istream::ignore(),
as described in issue 172.

Proposed resolution:

In 27.9.1.5 [filebuf.virtuals] paragraph 9, change all uses of
int in the description of setbuf() to
streamsize.

175. Ambiguity for basic_streambuf::pubseekpos() and a few other functions.

According to paragraph 8 of this section, the methods
basic_streambuf::pubseekpos(),
basic_ifstream::open(), and basic_ofstream::open()
"may" be overloaded by a version of this function taking the
type ios_base::open_mode as last argument instead of
ios_base::openmode (ios_base::open_mode is defined
in this section to be an alias for one of the integral types). The
clause specifies, that the last argument has a default argument in
three cases. However, this generates an ambiguity with the overloaded
version because now the arguments are absolutely identical if the last
argument is not specified.

176. exceptions() in ios_base...?

The "overload" for the function exceptions() in
paragraph 8 gives the impression that there is another overload of
this function defined in class ios_base. However, this is not
the case. Thus, it is hard to tell how the semantics (paragraph 9) can
be implemented: "Call the corresponding member function specified
in clause 27 [input.output]."

The reason this doesn't compile is because operator== was implemented
as a member function of the nested classes set::iterator and
set::const_iterator, and there is no conversion from const_iterator to
iterator. Surprisingly, (s.end() == i) does work, though, because of
the conversion from iterator to const_iterator.

I don't see a requirement anywhere in the standard that this must
work. Should there be one? If so, I think the requirement would need
to be added to the tables in section 24.1.1. I'm not sure about the
wording. If this requirement existed in the standard, I would think
that implementors would have to make the comparison operators
non-member functions.

This issue was also raised on comp.std.c++ by Darin
Adler. The example given was:

Where i and j denote objects of a container's iterator type,
either or both may be replaced by an object of the container's
const_iterator type referring to the same element with no
change in semantics.

[post-Toronto: Judy supplied a proposed resolution saying that
iterator and const_iterator could be freely mixed in
iterator comparison and difference operations.]

[Redmond: Dave and Howard supplied a new proposed resolution which
explicitly listed expressions; there was concern that the previous
proposed resolution was too informal.]

Rationale:

The LWG believes it is clear that the above wording applies only to
the nested types X::iterator and X::const_iterator,
where X is a container. There is no requirement that
X::reverse_iterator and X::const_reverse_iterator
can be mixed. If mixing them is considered important, that's a
separate issue. (Issue 280.)

It is the constness of the container which should control whether
it can be modified through a member function such as erase(), not the
constness of the iterators. The iterators only serve to give
positioning information.

Here's a simple and typical example problem which is currently very
difficult or impossible to solve without the change proposed
below.

Wrap a standard container C in a class W which allows clients to
find and read (but not modify) a subrange of (C.begin(), C.end()]. The
only modification clients are allowed to make to elements in this
subrange is to erase them from C through the use of a member function
of W.

The issue was discussed at length. It was generally agreed that 1)
There is no major technical argument against the change (although
there is a minor argument that some obscure programs may break), and
2) Such a change would not break const correctness. The concerns about
making the change were 1) it is user detectable (although only in
boundary cases), 2) it changes a large number of signatures, and 3) it
seems more of a design issue than an out-and-out defect.

The LWG believes that this issue should be considered as part of a
general review of const issues for the next revision of the
standard. Also see issue 200.

According to 12.8 [class.copy], an implementation is permitted
to not perform a copy of an argument, thus avoiding unnecessary
copies.

Rationale:

Two potential fixes were suggested by Matt Austern and Dietmar
Kühl, respectively, 1) overloading with array arguments, and 2) use of
a reference_traits class with a specialization for arrays. Andy
Koenig suggested changing to pass by value. In discussion, it appeared
that this was a much smaller change to the standard than the other two
suggestions, and any efficiency concerns were more than offset by the
advantages of the solution. Two implementors reported that the
proposed resolution passed their test suites.

The typedef members pointer, const_pointer, size_type, and difference_type
are required to be T*, T const*, size_t, and ptrdiff_t, respectively.

by:

The typedef members pointer, const_pointer, size_type, and difference_type
are required to be T*, T const*, std::size_t, and std::ptrdiff_t,
respectively.

In [lib.allocator.members] 20.4.1.1, paragraphs 3 and 6: replace:

3 Notes: Uses ::operator new(size_t) (18.4.1).

6 Note: the storage is obtained by calling ::operator new(size_t), but it
is unspecified when or how often this function is called. The use of hint is
unspecified, but intended as an aid to locality if an implementation so
desires.

by:

3 Notes: Uses ::operator new(std::size_t) (18.4.1).

6 Note: the storage is obtained by calling ::operator new(std::size_t), but
it is unspecified when or how often this function is called. The use of hint
is unspecified, but intended as an aid to locality if an implementation so
desires.

In [lib.char.traits.require] 21.1.1, paragraph 1: replace:

In Table 37, X denotes a Traits class defining types and functions for the
character container type CharT; c and d denote values of type CharT; p and q
denote values of type const CharT*; s denotes a value of type CharT*; n, i and
j denote values of type size_t; e and f denote values of type X::int_type; pos
denotes a value of type X::pos_type; and state denotes a value of type X::state_type.

by:

In Table 37, X denotes a Traits class defining types and functions for the
character container type CharT; c and d denote values of type CharT; p and q
denote values of type const CharT*; s denotes a value of type CharT*; n, i and
j denote values of type std::size_t; e and f denote values of type X::int_type;
pos denotes a value of type X::pos_type; and state denotes a value of type X::state_type.

The LWG believes correcting names like size_t and
ptrdiff_t to std::size_t and std::ptrdiff_t
to be essentially editorial. There can't be another size_t or
ptrdiff_t meant anyway because, according to 17.6.4.3.4 [extern.types],

For each type T from the Standard C library, the types ::T and std::T
are reserved to the implementation and, when defined, ::T shall be
identical to std::T.

The issue is treated as a Defect Report to make explicit the Project
Editor's authority to make this change.

[Post-Tokyo: Nico Josuttis provided the above wording at the
request of the LWG.]

[Toronto: This is tangentially related to issue 229, but only tangentially: the intent of this issue is to
address use of the name size_t in contexts outside of
namespace std, such as in the description of ::operator new.
The proposed changes should be reviewed to make sure they are
correct.]

[pre-Copenhagen: Nico has reviewed the changes and believes
them to be correct.]

Returns: An object s of unspecified type such that if [1] out is an (instance
of) basic_ostream then the expression out<<s behaves as if f(s) were
called, and if [2] in is an (instance of) basic_istream then the expression
in>>s behaves as if f(s) were called. Where f can be defined as:

ios_base& f(ios_base& str, ios_base::fmtflags mask)
{ // reset specified flags
    str.setf(ios_base::fmtflags(0), mask);
    return str;
}

[3] The expression out<<s has type ostream& and value out. [4] The
expression in>>s has type istream& and value in.

Given the definitions [1] and [2] for out and in, surely [3] should read:
"The expression out << s has type basic_ostream& ..." and
[4] should read: "The expression in >> s has type basic_istream&
..."

If the wording in the standard is correct, I can see no way of implementing
any of the manipulators so that they will work with wide character streams.

2- The type designated smanip in each of the following function descriptions is implementation-specified and may be different for each
function.

smanip resetiosflags(ios_base::fmtflags mask);

-3- Returns: An object s of unspecified type such that if out is an instance of basic_ostream<charT,traits> then the expression out<<s behaves
as if f(s, mask) were called, or if in is an instance of basic_istream<charT,traits> then the expression in>>s behaves as if
f(s, mask) were called. The function f can be defined as:*

[Footnote: The expression cin >> resetiosflags(ios_base::skipws) clears ios_base::skipws in the format flags stored in the
basic_istream<charT,traits> object cin (the same as cin >> noskipws), and the expression cout << resetiosflags(ios_base::showbase) clears
ios_base::showbase in the format flags stored in the basic_ostream<charT,traits> object cout (the same as cout <<
noshowbase). --- end footnote]

The expression out<<s has type basic_ostream<charT,traits>& and value out.
The expression in>>s has type basic_istream<charT,traits>& and value in.

smanip setiosflags(ios_base::fmtflags mask);

-4- Returns: An object s of unspecified type such that if out is an instance of basic_ostream<charT,traits> then the expression out<<s behaves
as if f(s, mask) were called, or if in is an instance of basic_istream<charT,traits> then the expression in>>s behaves as if f(s,
mask) were called. The function f can be defined as:

The expression out<<s has type basic_ostream<charT,traits>& and value out.
The expression in>>s has type basic_istream<charT,traits>& and value in.

smanip setbase(int base);

-5- Returns: An object s of unspecified type such that if out is an instance of basic_ostream<charT,traits> then the expression out<<s behaves
as if f(s, base) were called, or if in is an instance of basic_istream<charT,traits> then the expression in>>s behaves as if f(s,
base) were called. The function f can be defined as:

The expression out<<s has type basic_ostream<charT,traits>& and value out.
The expression in>>s has type basic_istream<charT,traits>& and value in.

smanip setfill(char_type c);

-6- Returns: An object s of unspecified type such that if out is (or is derived from) basic_ostream<charT,traits> and c has type charT then the
expression out<<s behaves as if f(s, c) were called. The function f can be
defined as:

The expression out<<s has type basic_ostream<charT,traits>& and value out.

smanip setprecision(int n);

-7- Returns: An object s of unspecified type such that if out is an instance of basic_ostream<charT,traits> then the expression out<<s behaves
as if f(s, n) were called, or if in is an instance of basic_istream<charT,traits> then the expression in>>s behaves as if f(s, n)
were called. The function f can be defined as:

The expression out<<s has type basic_ostream<charT,traits>& and value out.
The expression in>>s has type basic_istream<charT,traits>& and value in.

smanip setw(int n);

-8- Returns: An object s of unspecified type such that if out is an instance of basic_ostream<charT,traits> then the expression out<<s behaves
as if f(s, n) were called, or if in is an instance of basic_istream<charT,traits> then the expression in>>s behaves as if f(s, n)
were called. The function f can be defined as:

[Post-Tokyo: The issues list maintainer combined the proposed
resolution of this issue with the proposed resolution for issue 216 as they both involved the same paragraphs, and were so
intertwined that dealing with them separately appeared fraught with
error. The full text was supplied by Bill Plauger; it was cross
checked against changes supplied by Andy Sawyer. It should be further
checked by the LWG.]

bools are defined by the standard to be of integer types, as per
3.9.1 [basic.fundamental] paragraph 7. However "integer types"
seems to have a special meaning for the author of 18.2. The net effect
is an unclear and confusing specification for
numeric_limits<bool> as evidenced below.

18.2.1.2/7 says numeric_limits<>::digits is, for built-in integer
types, the number of non-sign bits in the representation.

4.5/4 states that a bool promotes to int; whereas 4.12/1 says any nonzero
arithmetic value converts to true.

I don't think it makes sense at all to require
numeric_limits<bool>::digits and numeric_limits<bool>::digits10 to
be meaningful.

The standard defines what constitutes a signed (resp. unsigned) integer
types. It doesn't categorize bool as being signed or unsigned. And the set of
values of bool type has only two elements.

I don't think it makes sense to require numeric_limits<bool>::is_signed
to be meaningful.

18.2.1.2/18 for numeric_limits<integer_type>::radix says:

For integer types, specifies the base of the representation.186)

This disposition is at best misleading and confusing, for the standard
requires a "pure binary numeration system" for integer types as per
3.9.1/7.

The footnote 186) says: "Distinguishes types with base other than 2 (e.g.
BCD)." This is also erroneous, as the standard never defines any integer
types with base representation other than 2.

Furthermore, numeric_limits<bool>::is_modulo and
numeric_limits<bool>::is_signed have similar problems.

186. bitset::set() second parameter should be bool

In section 20.5.2 [bitset.members], paragraph 13 defines the
bitset::set operation to take a second parameter of type int. The
function tests whether this value is non-zero to determine whether to
set the bit to true or false. The type of this second parameter should
be bool. For one thing, the intent is to specify a Boolean value. For
another, the result type from test() is bool. In addition, it's
possible to slice an integer that's larger than an int. This can't
happen with bool, since conversion to bool has the semantic of
translating 0 to false and any non-zero value to true.

Proposed resolution:

In 20.5 [template.bitset] Para 1 Replace:

bitset<N>& set(size_t pos, int val = true );

With:

bitset<N>& set(size_t pos, bool val = true );

In 20.5.2 [bitset.members] Para 12(.5) Replace:

bitset<N>& set(size_t pos, int val = 1 );

With:

bitset<N>& set(size_t pos, bool val = true );

[Kona: The LWG agrees with the description. Andy Sawyer will work
on better P/R wording.]

[Post-Tokyo: Andy provided the above wording.]

Rationale:

bool is a better choice. It is believed that binary
compatibility is not an issue, because this member function is
usually implemented as inline, and because it is already
the case that users cannot rely on the type of a pointer to a
nonvirtual member of a standard library class.

187. iter_swap underspecified

The description of iter_swap in 25.2.2 paragraph 7 says that it
"exchanges the values" of the objects to which two iterators
refer.

What it doesn't say is whether it does so using swap
or using the assignment operator and copy constructor.

This
question is an important one to answer, because swap is specialized to
work efficiently for standard containers. For example:

vector<int> v1, v2;
iter_swap(&v1, &v2);

Is this call to iter_swap equivalent to calling swap(v1, v2)?
Or is it equivalent to

{
vector<int> temp = v1;
v1 = v2;
v2 = temp;
}

The first alternative is O(1); the second is O(n).

A LWG member, Dave Abrahams, comments:

Not an objection necessarily, but I want to point out the cost of
that requirement:

iter_swap(list<T>::iterator, list<T>::iterator)

can currently be specialized to be more efficient than
iter_swap(T*,T*) for many T (by using splicing). Your proposal would
make that optimization illegal.

[Kona: The LWG notes the original need for iter_swap was proxy iterators
which are no longer permitted.]

Proposed resolution:

Change the effect clause of iter_swap in 25.2.2 paragraph 7 from:

Exchanges the values pointed to by the two iterators a and b.

to

swap(*a, *b).

Rationale:

It's useful to say just what iter_swap does. There may be
some iterators for which we want to specialize iter_swap,
but the fully general version should have a general specification.

Note that in the specific case of list<T>::iterator,
iter_swap should not be specialized as suggested above. That would do
much more than exchanging the two iterators' values: it would change
predecessor/successor relationships, possibly moving the iterator from
one list to another. That would surely be inappropriate.

189. setprecision() not specified correctly

27.4.2.2 paragraph 9 claims that setprecision() sets the precision,
and includes a parenthetical note saying that it is the number of
digits after the decimal point.

This claim is not strictly correct. For example, in the default
floating-point output format, setprecision sets the number of
significant digits printed, not the number of digits after the decimal
point.

I would like the committee to look at the definition carefully and
correct the statement in 27.4.2.2.

Proposed resolution:

Remove from 27.5.3.2 [fmtflags.state], paragraph 9, the text
"(number of digits after the decimal point)".

25.3.6 [lib.alg.heap.operations] states two key properties of a heap [a,b); the first of them is

"(1) *a is the largest element"

I think this is incorrect and should be changed to the wording in the proposed
resolution.

Actually there are two independent changes:

A - "part of the largest equivalence class" instead of "largest", because 25.3
[lib.alg.sorting] asserts "strict weak ordering" for all its sub clauses.

B - Take "an oldest" element from that equivalence class; otherwise the heap functions could not be used for a
priority queue as explained in 23.2.3.2.2 [lib.priqueue.members] (where I assume that a "priority queue" respects priority AND time).

Proposed resolution:

Change 25.4.6 [alg.heap.operations] property (1) from:

(1) *a is the largest element

to:

(1) There is no element greater than *a

195. Should basic_istream::sentry's constructor ever set eofbit?

Suppose that is.flags() & ios_base::skipws is nonzero.
What should basic_istream<>::sentry's constructor do if it
reaches eof while skipping whitespace? 27.6.1.1.2/5 suggests it
should set failbit. Should it set eofbit as well? The standard
doesn't seem to answer that question.

On the one hand, nothing in 27.7.2.1.3 [istream::sentry] says that
basic_istream<>::sentry should ever set eofbit. On the
other hand, 27.7.2.1 [istream] paragraph 4 says that if
extraction from a streambuf "returns
traits::eof(), then the input function, except as explicitly
noted otherwise, completes its actions and does
setstate(eofbit)". So the question comes down to
whether basic_istream<>::sentry's constructor is an
input function.

Comments from Jerry Schwarz:

It was always my intention that eofbit should be set any time that a
virtual returned something to indicate eof, no matter what reason
iostream code had for calling the virtual.

The motivation for this is that I did not want to require streambufs
to behave consistently if their virtuals are called after they have
signaled eof.

The classic case is a streambuf reading from a UNIX file. EOF isn't
really a state for UNIX file descriptors. The convention is that a
read on UNIX returns 0 bytes to indicate "EOF", but the file
descriptor isn't shut down in any way and future reads do not
necessarily also return 0 bytes. In particular, you can read from
tty's on UNIX even after they have signaled "EOF". (It
isn't always understood that a ^D on UNIX is not an EOF indicator, but
an EOL indicator. By typing a "line" consisting solely of
^D you cause a read to return 0 bytes, and by convention this is
interpreted as end of file.)

The standard doesn't appear to directly address these
questions. The standard needs to be clarified. At least two real-world
cases have been reported where library implementors wasted
considerable effort because of the lack of clarity in the
standard. The question is important because requiring pointers and
references to remain valid has the effect for practical purposes of
prohibiting iterators from pointing to cached rather than actual
elements of containers.

The standard itself assumes that pointers and references obtained
from an iterator are still valid after iterator destruction or
change. The definition of reverse_iterator::operator*(), 24.5.1.3.3 [reverse.iter.conv], which returns a reference, defines
effects:

Iterator tmp = current;
return *--tmp;

The definition of reverse_iterator::operator->(), 24.5.1.3.4 [reverse.iter.op.star], which returns a pointer, defines effects:

return &(operator*());

Because the standard itself assumes pointers and references remain
valid after iterator destruction or change, the standard should say so
explicitly. This will also reduce the chance of user code breaking
unexpectedly when porting to a different standard library
implementation.

Proposed resolution:

Add a new paragraph to X [iterator.concepts]:

Destruction of an iterator may invalidate pointers and references
previously obtained from that iterator.

Replace paragraph 1 of 24.5.1.3.3 [reverse.iter.conv] with:

Effects:

this->tmp = current;
--this->tmp;
return *this->tmp;

[Note: This operation must use an auxiliary member variable,
rather than a temporary variable, to avoid returning a reference that
persists beyond the lifetime of its associated iterator. (See
X [iterator.concepts].) The name of this member variable is shown for
exposition only. --end note]

[Post-Tokyo: The issue has been reformulated purely
in terms of iterators.]

[Pre-Toronto: Steve Cleary pointed out the no-invalidation
assumption by reverse_iterator. The issue and proposed resolution was
reformulated yet again to reflect this reality.]

[Copenhagen: Steve Cleary pointed out that reverse_iterator
assumes its underlying iterator has persistent pointers and
references. Andy Koenig pointed out that it is possible to rewrite
reverse_iterator so that it no longer makes such an assumption.
However, this issue is related to issue 299. If we
decide it is intentional that p[n] may return by value
instead of reference when p is a Random Access Iterator,
other changes in reverse_iterator will be necessary.]

Rationale:

This issue has been discussed extensively. Note that it is
not an issue about the behavior of predefined iterators. It is
asking whether or not user-defined iterators are permitted to have
transient pointers and references. Several people presented examples
of useful user-defined iterators that have such a property; examples
include a B-tree iterator, and an "iota iterator" that doesn't point
to memory. Library implementors already seem to be able to cope with
such iterators: they take pains to avoid forming references to memory
that gets iterated past. The only place where this is a problem is
reverse_iterator, so this issue changes
reverse_iterator to make it work.

This resolution does not weaken any guarantees provided by
predefined iterators like list<int>::iterator.
Clause 23 should be reviewed to make sure that guarantees for
predefined iterators are as strong as users expect.

Suppose that A is a class that conforms to the
Allocator requirements of Table 32, and a is an
object of class A. What should be the return
value of a.allocate(0)? Three reasonable
possibilities: forbid the argument 0, return
a null pointer, or require that the return value be a
unique non-null pointer.

Proposed resolution:

Add a note to the allocate row of Table 32:
"[Note: If n == 0, the return value is unspecified. --end note]"

Rationale:

A key to understanding this issue is that the ultimate use of
allocate() is to construct an iterator, and that iterator for zero
length sequences must be the container's past-the-end
representation. Since this already implies special case code, it
would be over-specification to mandate the return value.

200. Forward iterator requirements don't allow constant iterators

In table 74, the return type of the expression *a is given
as T&, where T is the iterator's value type.
For constant iterators, however, this is wrong. ("Value type"
is never defined very precisely, but it is clear that the value type
of, say, std::list<int>::const_iterator is supposed to be
int, not const int.)

Proposed resolution:

In table 74, in the *a and *r++ rows, change the
return type from "T&" to "T&
if X is mutable, otherwise const T&".
In the a->m row, change the return type from
"U&" to "U& if X is mutable,
otherwise const U&".

[Tokyo: The LWG believes this is the tip of a larger iceberg;
there are multiple const problems with the STL portion of the library
and that these should be addressed as a single package. Note
that issue 180 has already been declared NAD Future for
that very reason.]

[Redmond: the LWG thinks this is separable from other constness
issues. This issue is just cleanup; it clarifies language that was
written before we had iterator_traits. Proposed resolution was
modified: the original version only discussed *a. It was pointed out
that we also need to worry about *r++ and a->m.]

201. Numeric limits terminology wrong

In some places in this section, the terms "fundamental types" and
"scalar types" are used when the term "arithmetic types" is intended.
The current usage is incorrect because void is a fundamental type and
pointers are scalar types, neither of which should have
specializations of numeric_limits.

[Lillehammer: it remains true that numeric_limits is using
imprecise language. However, none of the proposals for changed
wording are clearer. A redesign of numeric_limits is needed, but this
is more a task than an open issue.]

-1- The numeric_limits component provides a C++ program with
information about various properties of the implementation's
representation of the arithmetic types.

-2- Specializations shall be provided for each arithmetic type, both
floating point and integer, including bool. The member
is_specialized shall be true for all such specializations of
numeric_limits.

-4- Non-arithmetic standard types, such
as complex<T> (26.3.2), shall not have specializations.

Change 18.3.2.3 [numeric.limits] to:

-1- The member is_specialized makes it possible to distinguish
between fundamental types, which have specializations, and non-scalar types,
which do not.

202. unique() effects unclear when predicate not an equivalence relation

What should unique() do if you give it a predicate that is not an
equivalence relation? There are at least two plausible answers:

1. You can't, because 25.2.8 says that it "eliminates all but
the first element from every consecutive group of equal
elements..." and it wouldn't make sense to interpret "equal" as
meaning anything but an equivalence relation. [It also doesn't
make sense to interpret "equal" as meaning ==, because then there
would never be any sense in giving a predicate as an argument at
all.]

2. The word "equal" should be interpreted to mean whatever the
predicate says, even if it is not an equivalence relation
(and in particular, even if it is not transitive).

The example that raised this question is from Usenet:

int f[] = { 1, 3, 7, 1, 2 };
int* z = unique(f, f+5, greater<int>());

If one blindly applies the definition using the predicate
greater<int>, and ignores the word "equal", you get:

Eliminates all but the first element from every consecutive group
of elements referred to by the iterator i in the range [first, last)
for which *i > *(i - 1).

The first surprise is the order of the comparison. If we wanted to
allow for the predicate not being an equivalence relation, then we
should surely compare elements the other way: pred(*(i - 1), *i). If
we do that, then the description would seem to say: "Break the
sequence into subsequences whose elements are in strictly increasing
order, and keep only the first element of each subsequence". So the
result would be 1, 1, 2. If we take the description at its word, it
would seem to call for strictly DEcreasing order, in which case the
result should be 1, 3, 7, 2.

In fact, the SGI implementation of unique() does neither: It yields 1,
3, 7.

Proposed resolution:

Change 25.3.9 [alg.unique] paragraph 1 to:

For a nonempty range, eliminates all but the first element from every
consecutive group of equivalent elements referred to by the iterator
i in the range [first+1, last) for which the following
conditions hold: *(i-1) == *i or pred(*(i-1), *i) !=
false.

Also insert a new paragraph, paragraph 2a, that reads: "Requires: The
comparison function must be an equivalence relation."

[Redmond: discussed arguments for and against requiring the
comparison function to be an equivalence relation. Straw poll:
14-2-5. First number is to require that it be an equivalence
relation, second number is to explicitly not require that it be an
equivalence relation, third number is people who believe they need
more time to consider the issue. A separate issue: Andy Sawyer
pointed out that "i-1" is incorrect, since "i" can refer to the first
iterator in the range. Matt provided wording to address this
problem.]

[Curaçao: The LWG changed "... the range (first,
last)..." to "... the range [first+1, last)..." for
clarity. They considered this change close enough to editorial to not
require another round of review.]

Rationale:

The LWG also considered an alternative resolution: change
25.3.9 [alg.unique] paragraph 1 to:

For a nonempty range, eliminates all but the first element from every
consecutive group of elements referred to by the iterator
i in the range (first, last) for which the following
conditions hold: *(i-1) == *i or pred(*(i-1), *i) !=
false.

Also insert a new paragraph, paragraph 1a, that reads: "Notes: The
comparison function need not be an equivalence relation."

Informally: the proposed resolution imposes an explicit requirement
that the comparison function be an equivalence relation. The
alternative resolution does not, and it gives enough information so
that the behavior of unique() for a non-equivalence relation is
specified. Both resolutions are consistent with the behavior of
existing implementations.

206. operator new(size_t, nothrow) may become unlinked to ordinary operator new if ordinary version replaced

As specified, the implementation of the nothrow version of operator
new does not necessarily call the ordinary operator new, but may
instead simply call the same underlying allocator and return a null
pointer instead of throwing an exception in case of failure.

Such an implementation breaks code that replaces the ordinary
version of new, but not the nothrow version. If the ordinary version
of new/delete is replaced, and if the replaced delete is not
compatible with pointers returned from the library versions of new,
then when the replaced delete receives a pointer allocated by the
library new(nothrow), crash follows.

The fix appears to be that the lib version of new(nothrow) must
call the ordinary new. Thus when the ordinary new gets replaced, the
lib version will call the replaced ordinary new and things will
continue to work.

An alternative would be to have the ordinary new call
new(nothrow). This seems sub-optimal to me as the ordinary version of
new is the version most commonly replaced in practice. So one would
still need to replace both ordinary and nothrow versions if one wanted
to replace the ordinary version.

Another alternative is to put in clear text that if one version is
replaced, then the other must also be replaced to maintain
compatibility. Then the proposed resolution below would just be a
quality of implementation issue. There is already such text in
paragraph 7 (under the new(nothrow) version). But this nuance is
easily missed if one reads only the paragraphs relating to the
ordinary new.

N2158
has been written explaining the rationale for the proposed resolution below.

Proposed resolution:

Change 18.5.1.1 [new.delete.single]:

void* operator new(std::size_t size, const std::nothrow_t&) throw();

-5- Effects: Same as above, except that it is called by a placement
version of a new-expression when a C++ program prefers a null pointer result as
an error indication, instead of a bad_alloc exception.

-6- Replaceable: a C++ program may define a function with this function
signature that displaces the default version defined by the C++ Standard
library.

-7- Required behavior: Return a non-null pointer to suitably aligned
storage (3.7.4), or else return a null pointer. This nothrow version of operator
new returns a pointer obtained as if acquired from the (possibly
replaced) ordinary version. This requirement is binding on a replacement
version of this function.

-8- Default behavior:

Calls operator new(size).

If the call to operator new(size) returns normally, returns
the result of that call, else

if the call to operator new(size) throws an exception, returns
a null pointer.

Executes a loop: Within the loop, the function first attempts to allocate the
requested storage. Whether the attempt involves a call to the Standard C library
function malloc is unspecified.

Returns a pointer to the allocated storage if the attempt is successful.
Otherwise, if the last argument to set_new_handler() was a null
pointer, return a null pointer.

Otherwise, the function calls the current new_handler (18.5.2.2). If the
called function returns, the loop repeats.

The loop terminates when an attempt to allocate the requested storage is
successful or when a called new_handler function does not return. If the
called new_handler function terminates by throwing a bad_alloc
exception, the function returns a null pointer.

-10- Effects: The deallocation function (3.7.4.2) called by a
delete-expression to render the value of ptr invalid.

-11- Replaceable: a C++ program may define a function with this function
signature that displaces the default version defined by the C++ Standard
library.

-12- Requires: the value of ptr is null or the value
returned by an earlier call to the (possibly replaced) operator
new(std::size_t) or operator
new(std::size_t, const std::nothrow_t&).

-13- Default behavior:

For a null value of ptr, do nothing.

Any other value of ptr shall be a value returned earlier by a
call to the default operator new, which was not invalidated by an
intervening call to operator delete(void*) (17.4.3.7). For such a
non-null value of ptr, reclaims storage allocated by the earlier
call to the default operator new.

-14- Remarks: It is unspecified under what conditions part or all of
such reclaimed storage is allocated by a subsequent call to operator
new or any of calloc, malloc, or realloc,
declared in <cstdlib>.

void operator delete(void* ptr, const std::nothrow_t&) throw();

-15- Effects: Same as above, except that it is called by the
implementation when an exception propagates from a nothrow placement version
of the new-expression (i.e. when the constructor throws an exception).

-16- Replaceable: a C++ program may define a function with this function
signature that displaces the default version defined by the C++ Standard
library.

-17- Requires: the value of ptr is null or the
value returned by an earlier call to the (possibly replaced) operator
new(std::size_t) or operator new(std::size_t, const
std::nothrow_t&).

-5- Effects: Same as above, except that it is called by a placement
version of a new-expression when a C++ program prefers a null pointer result as
an error indication, instead of a bad_alloc exception.

-6- Replaceable: a C++ program can define a function with this function
signature that displaces the default version defined by the C++ Standard
library.

-7- Required behavior: Return a non-null pointer to suitably aligned
storage (3.7.4), or else return a null pointer. This nothrow version of
operator new[] returns a pointer obtained as if acquired from the
(possibly replaced) operator new[](std::size_t size). This
requirement is binding on a replacement version of this function.

-8- Default behavior:

Calls operator new[](size).

If the call to operator new[](size) returns normally, returns
the result of that call, else

if the call to operator new[](size) throws an exception, returns
a null pointer.

Yes, they may become unlinked, and that is by design. If a user
replaces one, the user should also replace the other.

[
Reopened due to a gcc conversation between Howard, Martin and Gaby. Forwarding
or not is visible behavior to the client and it would be useful for the client
to know which behavior it could depend on.
]

[
Batavia: Robert voiced serious reservations about backwards compatibility for
his customers.
]

This places an unnecessary restriction on past-the-end iterators for
containers with forward iterators (for example, a singly-linked list). If the
past-the-end value on such a container was a well-known singular value, it would
still satisfy all forward iterator requirements.

Removing this restriction would allow, for example, a singly-linked list
without a "footer" node.

This would have an impact on existing code that expects past-the-end
iterators obtained from different (generic) containers being not equal.

Proposed resolution:

Change X [iterator.concepts] paragraph 5, the last sentence, from:

Dereferenceable and past-the-end values are always non-singular.

to:

Dereferenceable values are always non-singular.

Rationale:

For some kinds of containers, including singly linked lists and
zero-length vectors, null pointers are perfectly reasonable past-the-end
iterators. Null pointers are singular.

Although the standard is in general not consistent in declaration
style, the basic_string declarations are consistent other than the
above. The LWG felt that this was sufficient reason to merit the
change.

In the description of the algorithms, operators + and - are used
for some of the iterator categories for which they do not have to
be defined. In these cases the semantics of [...] a-b is the same
as of

return distance(a, b);

Proposed resolution:

On the last line of paragraph 9 of section 25 [algorithms] change
"a-b" to "b-a".

Rationale:

There are two ways to fix the defect; change the description to b-a
or change the return to distance(b,a). The LWG preferred the
former for consistency.

211. operator>>(istream&, string&) doesn't set failbit

The description of the stream extraction operator for std::string (section
21.3.7.9 [lib.string.io]) does not contain a requirement that failbit be set in
the case that the operator fails to extract any characters from the input
stream.

This implies that the typical construction

std::istream is;
std::string str;
...
while (is >> str) ... ;

(which tests failbit) is not required to terminate at EOF.

Furthermore, this is inconsistent with other extraction operators,
which do include this requirement. (See sections 27.7.2.2 [istream.formatted] and 27.7.2.3 [istream.unformatted]), where this
requirement is present, either explicitly or implicitly, for the
extraction operators. It is also present explicitly in the description
of getline (istream&, string&, charT) in section 21.4.8.9 [string.io] paragraph 8.)

Proposed resolution:

Insert new paragraph after paragraph 2 in section 21.4.8.9 [string.io]:

If the function extracts no characters, it calls
is.setstate(ios::failbit) which may throw ios_base::failure
(27.4.4.3).

212. Empty range behavior unclear for several algorithms

The standard doesn't specify what min_element() and max_element() shall
return if the range is empty (first equals last). The usual implementations
return last. This problem seems also apply to partition(), stable_partition(),
next_permutation(), and prev_permutation().

Proposed resolution:

In 25.4.7 [alg.min.max] - Minimum and maximum, paragraphs 7 and
9, append: Returns last if first==last.

Rationale:

The LWG looked in some detail at all of the above mentioned
algorithms, but believes that except for min_element() and
max_element() it is already clear that last is returned if first ==
last.

The specification for the associative container requirements in
Table 69 state that the find member function should "return
iterator; const_iterator for constant a". The map and multimap
container descriptions have two overloaded versions of find, but set
and multiset do not, all they have is:

iterator find(const key_type & x) const;

Proposed resolution:

Change the prototypes for find(), lower_bound(), upper_bound(), and
equal_range() in section 23.4.6 [set] and section 23.4.7 [multiset] to each have two overloads:

1) The member function `My::JCtype::is_kanji()' is non-const; the function
must be const in order for it to be callable on a const object (a reference to
which is what std::use_facet<>() returns).

2) In file filt.C, the definition of `JCtype::id' must be qualified with the
name of the namespace `My'.

3) In the definition of `loc' and subsequently in the call to use_facet<>()
in main(), the name of the facet is misspelled: it should read `My::JCtype'
rather than `My::JCType'.

Proposed resolution:

Replace the "Classifying Japanese characters" example in 22.2.8,
paragraph 11 with the following:

220. ~ios_base() usage valid?

The pre-conditions for the ios_base destructor are described in 27.4.2.7
paragraph 2:

Effects: Destroys an object of class ios_base. Calls each registered
callback pair (fn,index) (27.4.2.6) as (*fn)(erase_event,*this,index) at such
time that any ios_base member function called from within fn has well defined
results.

But what is not clear is: If no callback functions were ever registered, does
it matter whether the ios_base members were ever initialized?

For instance, does this program have defined behavior:

#include <ios>

class D : public std::ios_base { };

int main() { D d; }

It seems that registration of a callback function would surely affect the
state of an ios_base. That is, when you register a callback function with an
ios_base, the ios_base must record that fact somehow.

But if after construction the ios_base is in an indeterminate state, and that
state is not made determinate before the destructor is called, then how would
the destructor know if any callbacks had indeed been registered? And if the
number of callbacks that had been registered is indeterminate, then is not the
behavior of the destructor undefined?

By comparison, the basic_ios class description in 27.4.4.1 paragraph 2 makes
it explicit that destruction before initialization results in undefined
behavior.

Proposed resolution:

Modify 27.4.2.7 paragraph 1 from

Effects: Each ios_base member has an indeterminate value after
construction.

to

Effects: Each ios_base member has an indeterminate value after
construction. These members must be initialized by calling basic_ios::init. If an ios_base object is destroyed before these initializations
have taken place, the behavior is undefined.

Table 55 in 22.2.2.1.2 says that when basefield is 0 the integral
conversion specifier is %i. A %i specifier determines a number's base
by its prefix (0 for octal, 0x for hex), so the intention is clearly
that a 0x prefix is allowed. Paragraph 8 in the same section,
however, describes very precisely how characters are processed. (It
must be done "as if" by a specified code fragment.) That
description does not allow a 0x prefix to be recognized.

Very roughly, stage 2 processing reads a char_type ct. It converts
ct to a char, not by using narrow but by looking it up in a
translation table that was created by widening the string literal
"0123456789abcdefABCDEF+-". The character "x" is
not found in that table, so it can't be recognized by stage 2
processing.

Proposed resolution:

In 22.2.2.1.2 paragraph 8, replace the line:

static const char src[] = "0123456789abcdefABCDEF+-";

with the line:

static const char src[] = "0123456789abcdefxABCDEFX+-";

Rationale:

If we're using the technique of widening a string literal, the
string literal must contain every character we wish to recognize.
This technique has the consequence that alternate representations
of digits will not be recognized. This design decision was made
deliberately, with full knowledge of that limitation.

222. Are throw clauses necessary if a throw is already implied by the effects clause?

and the constructor that's implicitly called by the above is
defined to throw an out-of-range exception if pos > str.size(). See
section 21.4.1 [string.require] paragraph 4.

On the other hand, the compare function descriptions themselves don't have
"Throws: " clauses and according to 17.3.1.3, paragraph 3, elements
that do not apply to a function are omitted.

So it seems there is an inconsistency in the standard -- are the
"Effects" clauses correct, or are the "Throws" clauses
missing?

Proposed resolution:

In 17.5.1.4 [structure.specifications] paragraph 3, in footnote 148, attached to
the sentence "Descriptions of function semantics contain the
following elements (as appropriate):", insert the word
"further" so that the footnote reads:

To save space, items that do not apply to a function are
omitted. For example, if a function does not specify any further
preconditions, there will be no "Requires" paragraph.

Rationale:

The standard is somewhat inconsistent, but a failure to note a
throw condition in a throws clause does not grant permission not to
throw. The inconsistent wording is in a footnote, and thus
non-normative. The proposed resolution from the LWG clarifies the
footnote.

In the associative container requirements table in 23.1.2 paragraph 7,
a.clear() has complexity "log(size()) + N". However, the meaning of N
is not defined.

Proposed resolution:

In the associative container requirements table in 23.1.2 paragraph
7, the complexity of a.clear(), change "log(size()) + N" to
"linear in size()".

Rationale:

It's the "log(size())", not the "N", that is in
error: there's no difference between O(N) and O(N +
log(N)). The text in the standard is probably an incorrect
cut-and-paste from the range version of erase.

Imagine two users on opposite sides of town, each using unique on his own
sequences bounded by my_iterators. User1 looks at his standard library
implementation and says, "I know how to implement a more efficient
unique_copy for my_iterators", and writes:

User2 is shocked to find later that his fully-qualified use of
std::unique(user2::my_iterator, user2::my_iterator, user2::my_iterator) fails to
compile (if he's lucky). Looking in the standard, he sees the following Effects
clause for unique():

Effects: Eliminates all but the first element from every consecutive group
of equal elements referred to by the iterator i in the range [first, last) for
which the following corresponding conditions hold: *i == *(i - 1) or pred(*i,
*(i - 1)) != false

The standard gives user2 absolutely no reason to think he can interfere with
std::unique by defining names in namespace user2. His standard library has been
built with the template export feature, so he is unable to inspect the
implementation. User1 eventually compiles his code with another compiler, and
his version of unique_copy silently stops being called. Eventually, he realizes
that he was depending on an implementation detail of his library and had no
right to expect his unique_copy() to be called portably.

On the face of it, and given the above scenario, it may seem obvious that the
implementation of unique() shown is non-conforming because it uses unique_copy()
rather than ::std::unique_copy(). Most standard library implementations,
however, seem to disagree with this notion.

[Tokyo: Steve Adamczyk from
the core working group indicates that "std::" is sufficient;
leading "::" qualification is not required because any namespace
qualification is sufficient to suppress Koenig lookup.]

Proposed resolution:

Add a paragraph and a note at the end of
17.6.5.4 [global.functions]:

Unless otherwise specified, no global or non-member function in the
standard library shall use a function from another namespace which is
found through argument-dependent name lookup (3.4.2 [basic.lookup.argdep]).

[Note: the phrase "unless otherwise specified" is intended to
allow Koenig lookup in cases like that of ostream_iterators:

[Tokyo: The LWG agrees that this is a defect in the standard, but
is as yet unsure if the proposed resolution is the best
solution. Furthermore, the LWG believes that the same problem of
unqualified library names applies to wording in the standard itself,
and has opened issue 229 accordingly. Any resolution of
issue 225 should be coordinated with the resolution of
issue 229.]

[Toronto: The LWG is not sure if this is a defect in the
standard. Most LWG members believe that an implementation of
std::unique like the one quoted in this issue is already
illegal, since, under certain circumstances, its semantics are not
those specified in the standard. The standard's description of
unique does not say that overloading adjacent_find
should have any effect.]

[Curaçao: An LWG-subgroup spent an afternoon working on issues
225, 226, and 229. Their conclusion was that the issues should be
separated into an LWG portion (Howard's paper, N1387=02-0045) and an
EWG portion (Dave will write a proposal). The LWG and EWG had
(separate) discussions of this plan the next day. The proposed
resolution for this issue is in accordance with Howard's paper.]

Rationale:

It could be argued that this proposed resolution isn't strictly
necessary: the Standard doesn't grant implementors license to write a
standard function that behaves differently than specified in the
Standard just because of an unrelated user-defined name in some
other namespace. However, this is at worst a clarification. It is
surely right that algorithms shouldn't pick up random names, and that
user-defined names should have no effect unless otherwise specified.
Issue 226 deals with the question of when it is
appropriate for the standard to explicitly specify otherwise.

This answer has some drawbacks. First of all, it makes writing lib2 difficult
and somewhat slippery. The implementor needs to remember to write the
using-declaration, or generic_sort will fail to compile when T is a built-in
type. The second drawback is that the use of this style in lib2 effectively
"reserves" names in any namespace which defines types which may
eventually be used with lib2. This may seem innocuous at first when applied to
names like swap, but consider more ambiguous names like unique_copy() instead.
It is easy to imagine the user wanting to define these names differently in his
own namespace. A definition with semantics incompatible with the standard
library could cause serious problems (see issue 225).

Why, you may ask, can't we just partially specialize std::swap()? It's
because the language doesn't allow for partial specialization of function
templates. If you write:

This issue reflects concerns raised by the "Namespace issue
with specialized swap" thread on comp.lang.c++.moderated. A
similar set of concerns was earlier raised on the boost.org mailing
list and the ACCU-general mailing list. Also see library reflector
message c++std-lib-7354.

J. C. van Winkel points out (in c++std-lib-9565) another unexpected
fact: it's impossible to output a container of std::pair's using copy
and an ostream_iterator, as long as both pair-members are built-in or
std:: types. That's because a user-defined operator<< for (for
example) std::pair<const std::string, int> will not be found:
lookup for operator<< will be performed only in namespace std.
Opinions differed on whether or not this was a defect, and, if so,
whether the defect is that something is wrong with user-defined
functionality and std, or whether it's that the standard library does
not provide an operator<< for std::pair<>.

[Tokyo: Summary, "There is no conforming way to extend
std::swap for user defined templates." The LWG agrees that
there is a problem. Would like more information before
proceeding. This may be a core issue. Core issue 229 has been opened
to discuss the core aspects of this problem. It was also noted that
submissions regarding this issue have been received from several
sources, but too late to be integrated into the issues list.
]

[Post-Tokyo: A paper with several proposed resolutions,
J16/00-0029==WG21/N1252, "Shades of namespace std
functions" by Alan Griffiths, is in the Post-Tokyo mailing. It
should be considered a part of this issue.]

[Toronto: Dave Abrahams and Peter Dimov have proposed a
resolution that involves core changes: it would add partial
specialization of function templates. The Core Working Group is
reluctant to add partial specialization of function templates: it is
viewed as a large change, and the CWG believes that the proposal
presented leaves some syntactic issues unanswered; if the CWG does add partial
specialization of function templates, it wishes to develop its own
proposal. The LWG continues to believe that there is a serious
problem: there is no good way for users to force the library to use
user specializations of generic standard library functions, and in
certain cases (e.g. transcendental functions called by
valarray and complex) this is important. Koenig
lookup isn't adequate, since names within the library must be
qualified with std (see issue 225), specialization doesn't
work (we don't have partial specialization of function templates), and
users aren't permitted to add overloads within namespace std.
]

[Copenhagen: Discussed at length, with no consensus. Relevant
papers in the pre-Copenhagen mailing: N1289, N1295, N1296. Discussion
focused on four options. (1) Relax restrictions on overloads within
namespace std. (2) Mandate that the standard library use unqualified
calls for swap and possibly other functions. (3) Introduce
helper class templates for swap and possibly other functions.
(4) Introduce partial specialization of function templates. Every
option had both support and opposition. Straw poll (first number is
support, second is strongly opposed): (1) 6, 4; (2) 6, 7; (3) 3, 8;
(4) 4, 4.]

[Redmond: Discussed, again no consensus. Herb presented an
argument that a user who is defining a type T with an
associated swap should not be expected to put that
swap in namespace std, either by overloading or by partial
specialization. The argument is that swap is part of
T's interface, and thus should go in the same namespace as
T and only in that namespace. If we accept this argument,
the consequence is that standard library functions should use
unqualified call of swap. (And which other functions? Any?)
A small group (Nathan, Howard, Jeremy, Dave, Matt, Walter, Marc) will
try to put together a proposal before the next meeting.]

[Curaçao: An LWG-subgroup spent an afternoon working on issues
225, 226, and 229. Their conclusion was that the issues should be
separated into an LWG portion (Howard's paper, N1387=02-0045) and an
EWG portion (Dave will write a proposal). The LWG and EWG had
(separate) discussions of this plan the next day. The proposed
resolution is the one proposed by Howard.]

[Santa Cruz: the LWG agreed with the general direction of
Howard's paper, N1387. (Roughly: Koenig lookup is disabled unless
we say otherwise; this issue is about when we do say otherwise.)
However, there were concerns about wording. Howard will provide new
wording. Bill and Jeremy will review it.]

[Kona: Howard proposed the new wording. The LWG accepted his
proposed resolution.]

Rationale:

Informally: introduce a Swappable concept, and specify that the
value types of the iterators passed to certain standard algorithms
(such as iter_swap, swap_ranges, reverse, rotate, and sort) conform
to that concept. The Swappable concept will make it clear that
these algorithms use unqualified lookup for the calls
to swap. Also, in 26.6.3.3 [valarray.transcend] paragraph 1,
state that the valarray transcendentals use unqualified lookup.

228. Incorrect specification of "..._byname" facets

The sections 22.4.1.2 [locale.ctype.byname], 22.4.1.5 [locale.codecvt.byname],
22.2.1.6, 22.4.3.2 [locale.numpunct.byname], 22.4.4.2 [locale.collate.byname], 22.4.5.4 [locale.time.put.byname], 22.4.6.4 [locale.moneypunct.byname], and 22.4.7.2 [locale.messages.byname] overspecify the
definitions of the "..._byname" classes by listing a bunch
of virtual functions. At the same time, no semantics of these
functions are defined. Real implementations do not define these
functions because the functional part of the facets is actually
implemented in the corresponding base classes and the constructor of
the "..._byname" version just provides suitable data used by
these implementations. For example, the 'numpunct' methods just return
values from a struct. The base class uses a statically initialized
struct while the derived version reads the contents of this struct
from a table. However, no virtual function is defined in
'numpunct_byname'.

For most classes this does not impose a problem but specifically
for 'ctype' it does: The specialization for 'ctype_byname<char>'
is required because otherwise the semantics would change due to the
virtual functions defined in the general version for 'ctype_byname':
In 'ctype<char>' the method 'do_is()' is not virtual but it is
made virtual in both 'ctype<cT>' and 'ctype_byname<cT>'.
Thus, a class derived from 'ctype_byname<char>' can tell whether
this class is specialized or not under the current specification:
Without the specialization, 'do_is()' is virtual while with
specialization it is not virtual.

229. Unqualified references of other library entities

Throughout the library chapters, the descriptions of library entities refer
to other library entities without necessarily qualifying the names.

For example, section 25.2.2 "Swap" describes the effect of
swap_ranges in terms of the unqualified name "swap". This section
could reasonably be interpreted to mean that the library must be implemented so
as to do a lookup of the unqualified name "swap", allowing users to
override any ::std::swap function when Koenig lookup applies.

Although it would have been best to use explicit qualification with
"::std::" throughout, too many lines in the standard would have to be
adjusted to make that change in a Technical Corrigendum.

Issue 182, which addresses qualification of
size_t, is a special case of this.

Proposed resolution:

To section 17.4.1.1 "Library contents" Add the following paragraph:

Whenever a name x defined in the standard library is mentioned, the name x
is assumed to be fully qualified as ::std::x, unless explicitly described
otherwise. For example, if the Effects section for library function F is
described as calling library function G, the function ::std::G is meant.

[Post-Tokyo: Steve Clamage submitted this issue at the request of
the LWG to solve a problem in the standard itself similar to the
problem within implementations of the library identified by issue
225. Any resolution of issue 225 should be coordinated with the
resolution of this issue.]

[post-Toronto: Howard is undecided about whether it is
appropriate for all standard library function names referred to in
other standard library functions to be explicitly qualified by
std: it is common advice that users should define global
functions that operate on their class in the same namespace as the
class, and this requires argument-dependent lookup if those functions
are intended to be called by library code. Several LWG members are
concerned that valarray appears to require argument-dependent lookup,
but that the wording may not be clear enough to fall under
"unless explicitly described otherwise".]

[Curaçao: An LWG-subgroup spent an afternoon working on issues
225, 226, and 229. Their conclusion was that the issues should be
separated into an LWG portion (Howard's paper, N1387=02-0045) and an
EWG portion (Dave will write a proposal). The LWG and EWG had
(separate) discussions of this plan the next day. This paper resolves
issues 225 and 226. In light of that resolution, the proposed
resolution for the current issue makes sense.]

230. Assignable specified without also specifying CopyConstructible

Issue 227 identified an instance (std::swap) where
Assignable was specified without also specifying
CopyConstructible. The LWG asked that the standard be searched to
determine if the same defect existed elsewhere.

There are a number of places (see proposed resolution below) where
Assignable is specified without also specifying
CopyConstructible. There are also several cases where both are
specified. For example, 26.5.1 [rand.req].

Proposed resolution:

In 23.2 [container.requirements] table 65 for value_type:
change "T is Assignable" to "T is CopyConstructible and
Assignable"

In 23.2.4 [associative.reqmts] table 69 X::key_type; change
"Key is Assignable" to "Key is
CopyConstructible and Assignable"

In 24.2.4 [output.iterators] paragraph 1, change:

A class or a built-in type X satisfies the requirements of an
output iterator if X is an Assignable type (23.1) and also the
following expressions are valid, as shown in Table 73:

to:

A class or a built-in type X satisfies the requirements of an
output iterator if X is a CopyConstructible (20.1.3) and Assignable
type (23.1) and also the following expressions are valid, as shown in
Table 73:

[Post-Tokyo: Beman Dawes submitted this issue at the request of
the LWG. He asks that the 25.3.5 [alg.replace] and 25.3.6 [alg.fill] changes be studied carefully, as it is not clear that
CopyConstructible is really a requirement and may be
overspecification.]

[Portions of the resolution for issue 230 have been superseded by
the resolution of issue 276.]

Rationale:

The original proposed resolution also included changes to input
iterator, fill, and replace. The LWG believes that those changes are
not necessary. The LWG considered some blanket statement, where an
Assignable type was also required to be Copy Constructible, but
decided against this because fill and replace really don't require the
Copy Constructible property.

From my C experience, I would expect "1e+00"; this is what
printf("%.0e", 1.00) does. G++ outputs
"1.000000e+00".

The only indication I can find in the standard is 22.2.2.2.2/11,
where it says "For conversion from a floating-point type, if
(flags & fixed) != 0 or if str.precision() > 0, then
str.precision() is specified in the conversion specification."
This is an obvious error, however; fixed is not a mask for a field,
but a value that a multi-bit field may take -- the results of and'ing
fmtflags with ios::fixed are not defined, at least not if
ios::scientific has been set. G++'s behavior corresponds to what might
happen if you do use (flags & fixed) != 0 with a typical
implementation (floatfield == 3 << something, fixed == 1
<< something, and scientific == 2 << something).

Presumably, the intent is either (flags & floatfield) != 0, or
(flags & floatfield) == fixed; the first gives something more or
less like the effect of precision in a printf floating point
conversion. Only more or less, of course. In order to implement printf
formatting correctly, you must know whether the precision was
explicitly set or not. Say by initializing it to -1, instead of 6, and
stating that for floating point conversions, if precision < -1, 6
will be used, for fixed point, if precision < -1, 1 will be used,
etc. Plus, of course, if precision == 0 and flags & floatfield ==
0, 1 should be used. But it probably isn't necessary to emulate all
of the anomalies of printf:-).

Proposed resolution:

Replace 22.4.2.2.2 [facet.num.put.virtuals], paragraph 11, with the following
sentence:

For conversion from a floating-point type,
str.precision() is specified in the conversion
specification.

Rationale:

The floatfield determines whether numbers are formatted as if
with %f, %e, or %g. If the fixed bit is set, it's %f,
if scientific it's %e, and if both bits are set, or
neither, it's %g.

Turning to the C standard, a precision of 0 is meaningful
for %f and %e. For %g, precision 0 is taken to be the same as
precision 1.

The proposed resolution has the effect that if neither
fixed nor scientific is set we'll be
specifying a precision of 0, which will be internally
turned into 1. There's no need to call it out as a special
case.

The output of the above program will be "1e+00".

[Post-Curaçao: Howard provided improved wording covering the case
where precision is 0 and mode is %g.]

232. "depends" poorly defined in 17.4.3.1

17.4.3.1/1 uses the term "depends" to limit the set of allowed
specializations of standard templates to those that "depend on a
user-defined name of external linkage."

This term, however, is not adequately defined, making it possible to
construct a specialization that is, I believe, technically legal according to
17.4.3.1/1, but that specializes a standard template for a built-in type such as
'int'.

This terminology is used in section 2.5.2 and 4.1.1 of The C++
Programming Language. It disallows the example in the issue,
since the underlying type itself is not user-defined. The only
possible problem I can see is for non-type templates, but there is no
way for a user to come up with a specialization for bitset, for
example, that might not already have been specialized by the
implementor.

If mm is a multimap and p is an iterator
into the multimap, then mm.insert(p, x) inserts
x into mm with p as a hint as
to where it should go. Table 69 claims that the execution time is
amortized constant if the insert winds up taking place adjacent to
p, but does not say when, if ever, this is guaranteed to
happen. All it says it that p is a hint as to where to
insert.

The question is whether there is any guarantee about the relationship
between p and the insertion point, and, if so, what it
is.

I believe the present state is that there is no guarantee: The user
can supply p, and the implementation is allowed to
disregard it entirely.

Additional comments from Nathan:
The vote [in Redmond] was on whether to elaborately specify the use of
the hint, or to require behavior only if the value could be inserted
adjacent to the hint. I would like to ensure that we have a chance to
vote for a deterministic treatment: "before, if possible, otherwise
after, otherwise anywhere appropriate", as an alternative to the
proposed "before or after, if possible, otherwise [...]".

[Toronto: there was general agreement that this is a real defect:
when inserting an element x into a multiset that already contains
several copies of x, there is no way to know whether the hint will be
used. The proposed resolution was that the new element should always
be inserted as close to the hint as possible. So, for example, if
there is a subsequence of equivalent values, then providing a.begin()
as the hint means that the new element should be inserted before the
subsequence even if a.begin() is far away. JC van Winkel supplied
precise wording for this proposed resolution, and also for an
alternative resolution in which hints are only used when they are
adjacent to the insertion point.]

[Copenhagen: the LWG agreed to the original proposed resolution,
in which an insertion hint would be used even when it is far from the
insertion point. This was contingent on seeing an example
implementation showing that it is possible to implement this
requirement without loss of efficiency. John Potter provided such a
example implementation.]

[Redmond: The LWG was reluctant to adopt the proposal that
emerged from Copenhagen: it seemed excessively complicated, and went
beyond fixing the defect that we identified in Toronto. PJP provided
the new wording described in this issue. Nathan agrees that we
shouldn't adopt the more detailed semantics, and notes: "we know that
you can do it efficiently enough with a red-black tree, but there are
other (perhaps better) balanced tree techniques that might differ
enough to make the detailed semantics hard to satisfy."]

[Curaçao: Nathan should give us the alternative wording he
suggests so the LWG can decide between the two options.]

[Lillehammer: The LWG previously rejected the more detailed
semantics, because it seemed more like a new feature than a defect
fix. We're now more sympathetic to it, but we (especially
Bill) are still worried about performance. N1780 describes a naive
algorithm, but it's not clear whether there is a non-naive
implementation. Is it possible to implement this as efficiently as
the current version of insert?]

[Batavia: N1780 accepted with minor wording changes in the proposed
wording (reflected in the proposed resolution below). Concerns about
the performance of the algorithm were satisfactorily met by N1780.
Issue 371 already handles the stability of equal ranges, so that part
of the resolution from N1780 is no longer needed (nor reflected in the
proposed wording below).]

inserts t and returns the iterator pointing to the newly inserted
element. If a range containing elements equivalent to t exists in
a_eq, t is inserted at the end of that range.

logarithmic

a.insert(p,t)

iterator

inserts t if and only if there is no element with key equivalent to the
key of t in containers with unique keys; always inserts t in containers
with equivalent keys. Always returns the iterator pointing to the element
with key equivalent to the key of t. t is inserted as close as possible
to the position just prior to p.

logarithmic in general, but amortized constant if t is inserted right
before p.

235. No specification of default ctor for reverse_iterator

The declaration of reverse_iterator lists a default
constructor. However, no specification is given what this constructor
should do.

Proposed resolution:

In section 24.5.1.3.1 [reverse.iter.cons] add the following
paragraph:

reverse_iterator()

Default initializes current. Iterator operations
applied to the resulting iterator have defined behavior if and
only if the corresponding operations are defined on a default
constructed iterator of type Iterator.

[pre-Copenhagen: Dietmar provide wording for proposed
resolution.]

237. Undefined expression in complexity specification

The complexity specification in paragraph 6 says that the complexity
is linear in first - last. Even if operator-() is
defined on iterators this term is in general undefined because it
would have to be last - first.

Paragraph 3 of 27.7.1.1 basically says that in this case neither
the output sequence nor the input sequence is initialized and
paragraph 2 of 27.7.1.2 basically says that str() either
returns the input or the output sequence. None of them is initialized,
ie. both are empty, in which case the return from str() is
defined to be basic_string<cT>().

However, probably only test cases in some testsuites will detect this
"problem"...

Proposed resolution:

Remove 27.7.1.1 paragraph 4.

Rationale:

We could fix 27.7.1.1 paragraph 4, but there would be no point. If
we fixed it, it would say just the same thing as text that's already
in the standard.

239. Complexity of unique() and/or unique_copy incorrect

The complexity of unique and unique_copy are inconsistent with each
other and inconsistent with the implementations. The standard
specifies:

for unique():

-3- Complexity: If the range (last - first) is not empty, exactly
(last - first) - 1 applications of the corresponding predicate, otherwise
no applications of the predicate.

for unique_copy():

-7- Complexity: Exactly last - first applications of the corresponding
predicate.

The implementations do it the other way round: unique() applies the
predicate last-first times and unique_copy() applies it last-first-1
times.

As both algorithms use the predicate for pair-wise comparison of
sequence elements I don't see a justification for unique_copy()
applying the predicate last-first times, especially since it is not
specified to which pair in the sequence the predicate is applied
twice.

Proposed resolution:

Change both complexity sections in 25.3.9 [alg.unique] to:

Complexity: For nonempty ranges, exactly last - first - 1
applications of the corresponding predicate.

-1- Returns: The first iterator i such that both i and i + 1 are in
the range [first, last) for which the following corresponding
conditions hold: *i == *(i + 1), pred(*i, *(i + 1)) != false. Returns
last if no such iterator is found.

In the Complexity section, it is not defined what "value"
is supposed to mean. My best guess is that "value" means an
object for which one of the conditions pred(*i,value) or
pred(value,*i) is true, where i is the iterator defined in the Returns
section. However, the value type of the input sequence need not be
equality-comparable and for this reason the term find(first, last,
value) - first is meaningless.

A term such as find_if(first, last, bind2nd(pred,*i)) - first or
find_if(first, last, bind1st(pred,*i)) - first might come closer to
the intended specification. Binders can only be applied to function
objects that have the function call operator declared const, which is
not required of predicates because they can have non-const data
members. For this reason, a specification using a binder could only be
an "as-if" specification.

241. Does unique_copy() require CopyConstructible and Assignable?

Some popular implementations of unique_copy() create temporary
copies of values in the input sequence, at least if the input iterator
is a pointer. Such an implementation is built on the assumption that
the value type is CopyConstructible and Assignable.

It is common practice in the standard that algorithms explicitly
specify any additional requirements that they impose on any of the
types used by the algorithm. An example of an algorithm that creates
temporary copies and correctly specifies the additional requirements
is accumulate(), 26.5.1 [rand.req].

Since the specifications of unique() and unique_copy() do not
require CopyConstructible and Assignable of the InputIterator's value
type the above mentioned implementations are not standard-compliant. I
cannot judge whether this is a defect in the standard or a defect in
the implementations.

-4- Requires: The ranges [first, last) and [result,
result+(last-first)) shall not overlap. The expression *result =
*first must be valid. If neither InputIterator nor OutputIterator
meets the requirements of forward iterator then the value type of
InputIterator must be copy constructible. Otherwise copy
constructible is not required.

[Redmond: the original proposed resolution didn't impose an
explicit requirement that the iterator's value type must be copy
constructible, on the grounds that an input iterator's value type must
always be copy constructible. Not everyone in the LWG thought that
this requirement was clear from table 72. It has been suggested that
it might be possible to implement unique_copy without
requiring assignability, although current implementations do impose
that requirement. Howard provided new wording.]

[
Curaçao: The LWG changed the PR editorially to specify
"neither...nor...meet..." as clearer than
"both...and...do not meet...". Change believed to be so
minor as not to require re-review.
]

The algorithms transform(), accumulate(), inner_product(),
partial_sum(), and adjacent_difference() require that the function
object supplied to them shall not have any side effects.

The standard defines a side effect in 1.9 [intro.execution] as:

-7- Accessing an object designated by a volatile lvalue (basic.lval),
modifying an object, calling a library I/O function, or calling a function
that does any of those operations are all side effects, which are changes
in the state of the execution environment.

As a consequence, the function call operator of a function object supplied
to any of the algorithms listed above cannot modify data members, cannot
invoke any function that has a side effect, and cannot even create and
modify temporary objects. It is difficult to imagine a function object
that is still useful under these severe limitations. For instance, any
non-trivial transformation function supplied to transform() might involve
creation and modification of temporaries, which is prohibited under the
current wording of the standard.

On the other hand, popular implementations of these algorithms exhibit
uniform and predictable behavior when invoked with side-effect-producing
function objects. It looks like the strong requirement is not needed for
efficient implementation of these algorithms.

The requirement of side-effect-free function objects could be
replaced by a more relaxed basic requirement (which would hold for all
function objects supplied to any algorithm in the standard library):

A function object supplied to an algorithm shall not invalidate
any iterator or sequence that is used by the algorithm. Invalidation of
the sequence includes destruction of the sorting order if the algorithm
relies on the sorting order (see section 25.3 - Sorting and related operations
[lib.alg.sorting]).

I can't judge whether it is intended that the function objects supplied
to transform(), accumulate(), inner_product(), partial_sum(), or adjacent_difference()
shall not modify sequence elements through dereferenced iterators.

It is debatable whether this issue is a defect or a change request.
Since the consequences for user-supplied function objects are drastic and
limit the usefulness of the algorithms significantly I would consider it
a defect.

Proposed resolution:

Things to notice about these changes:

1. The fully-closed ("[]", as opposed to half-closed "[)") ranges
are intentional: we want to prevent side effects from
invalidating the end iterators.

2. That has the unintended consequence of prohibiting
modification of the end element as a side effect. This could
conceivably be significant in some cases.

3. The wording also prevents side effects from modifying elements
of the output sequence. I can't imagine why anyone would want
to do this, but it is arguably a restriction that implementors
don't need to place on users.

4. Lifting the restrictions imposed in #2 and #3 above is possible
and simple, but would require more verbiage.

basic_istream<>::get() and basic_istream<>::getline()
are unclear with respect to the behavior and side effects of the named
functions in case of an error.

27.6.1.3, p1 states that "... If the sentry object returns
true, when converted to a value of type bool, the function endeavors
to obtain the requested input..." It is not clear from this (or
the rest of the paragraph) what precisely the behavior should be when
the sentry ctor exits by throwing an exception or when the sentry
object returns false. In particular, what is the number of characters
extracted that gcount() returns supposed to be?

27.6.1.3 p8 and p19 say about the effects of get() and getline():
"... In any case, it then stores a null character (using
charT()) into the next successive location of the array." It is not
clear whether this sentence applies if either of the conditions above
holds (i.e., when the sentry fails).

Proposed resolution:

Add to 27.6.1.3, p1 after the sentence

"... If the sentry object returns true, when converted to a value of
type bool, the function endeavors to obtain the requested input."

the following

"Otherwise, if the sentry constructor exits by throwing an exception or
if the sentry object returns false, when converted to a value of type
bool, the function returns without attempting to obtain any input. In
either case the number of extracted characters is set to 0; unformatted
input functions taking a character array of non-zero size as an argument
shall also store a null character (using charT()) in the first location
of the array."

Rationale:

Although the general philosophy of the input functions is that the
argument should not be modified upon failure, getline
historically added a terminating null unconditionally. Most
implementations still do that. Earlier versions of the draft standard
had language that made this an unambiguous requirement; those words
were moved to a place where their context made them less clear. See
Jerry Schwarz's message c++std-lib-7618.

Paragraph 2 of 23.3.6.5 [vector.modifiers] describes the complexity
of vector::insert:

Complexity: If first and last are forward iterators, bidirectional
iterators, or random access iterators, the complexity is linear in
the number of elements in the range [first, last) plus the distance
to the end of the vector. If they are input iterators, the complexity
is proportional to the number of elements in the range [first, last)
times the distance to the end of the vector.

First, this fails to address the non-iterator forms of
insert.

Second, the complexity for input iterators misses an edge case --
it requires that an arbitrary number of elements can be added at
the end of a vector in constant time.

I looked to see if deque had a similar problem, and was
surprised to find that deque places no requirement on the
complexity of inserting multiple elements (23.3.3.4 [deque.modifiers],
paragraph 3):

Complexity: In the worst case, inserting a single element into a
deque takes time linear in the minimum of the distance from the
insertion point to the beginning of the deque and the distance
from the insertion point to the end of the deque. Inserting a
single element either at the beginning or end of a deque always
takes constant time and causes a single call to the copy constructor
of T.

Proposed resolution:

Change Paragraph 2 of 23.3.6.5 [vector.modifiers] to

Complexity: The complexity is linear in the number of elements
inserted plus the distance to the end of the vector.

[For input iterators, one may achieve this complexity by first
inserting at the end of the vector, and then using
rotate.]

Change 23.3.3.4 [deque.modifiers], paragraph 3, to:

Complexity: The complexity is linear in the number of elements
inserted plus the shorter of the distances to the beginning and
end of the deque. Inserting a single element at either the
beginning or the end of a deque causes a single call to the copy
constructor of T.

Rationale:

This is a real defect, and the proposed resolution fixes it: some
complexities aren't specified that should be. This proposed
resolution does constrain deque implementations (it rules out the
most naive possible implementations), but the LWG doesn't see a
reason to permit that implementation.

248. time_get fails to set eofbit

There is no requirement that any of time_get member functions set
ios::eofbit when they reach the end iterator while parsing their input.
Since members of both the num_get and money_get facets are required to
do so (22.2.2.1.2, and 22.2.6.1.2, respectively), time_get members
should follow the same requirement for consistency.

Proposed resolution:

Add paragraph 2 to section 22.2.5.1 with the following text:

If the end iterator is reached during parsing by any of the get()
member functions, the member sets ios_base::eofbit in err.

Rationale:

Two alternative resolutions were proposed. The LWG chose this one
because it was more consistent with the way eof is described for other
input facets.

This is unnecessary and defeats an important feature of splice. In
fact, the SGI STL guarantees that iterators to x remain valid
after splice.

Proposed resolution:

Add a footnote to 23.3.5.5 [list.ops], paragraph 1:

[Footnote: As specified in [default.con.req], paragraphs
4-5, the semantics described in this clause applies only to the case
where allocators compare equal. --end footnote]

In 23.3.5.5 [list.ops], replace paragraph 4 with:

Effects: Inserts the contents of x before position and x becomes
empty. Pointers and references to the moved elements of x now refer to
those same elements but as members of *this. Iterators referring to the
moved elements will continue to refer to their elements, but they now
behave as iterators into *this, not into x.

In 23.3.5.5 [list.ops], replace paragraph 7 with:

Effects: Inserts an element pointed to by i from list x before
position and removes the element from x. The result is unchanged if
position == i or position == ++i. Pointers and references to *i continue
to refer to this same element but as a member of *this. Iterators to *i
(including i itself) continue to refer to the same element, but now
behave as iterators into *this, not into x.

In 23.3.5.5 [list.ops], replace paragraph 12 with:

Requires: [first, last) is a valid range in x. The result is
undefined if position is an iterator in the range [first, last).
Pointers and references to the moved elements of x now refer to those
same elements but as members of *this. Iterators referring to the moved
elements will continue to refer to their elements, but they now behave as
iterators into *this, not into x.

[pre-Copenhagen: Howard provided wording.]

Rationale:

The original proposed resolution said that iterators and references
would remain "valid". The new proposed resolution clarifies what that
means. Note that this only applies to the case of equal allocators.
From [default.con.req] paragraph 4, the behavior of list when
allocators compare nonequal is outside the scope of the standard.

251. basic_stringbuf missing allocator_type

The synopsis for the template class basic_stringbuf
doesn't list a typedef for the template parameter
Allocator. This makes it impossible to determine the type of
the allocator at compile time. It's also inconsistent with all other
template classes in the library that do provide a typedef for the
Allocator parameter.

Proposed resolution:

Add to the synopses of the class templates basic_stringbuf (27.7.1),
basic_istringstream (27.7.2), basic_ostringstream (27.7.3), and
basic_stringstream (27.7.4) the typedef:

typedef Allocator allocator_type;

252. missing casts/C-style casts used in iostreams

27.7.2.2, p1 uses a C-style cast rather than the more appropriate
const_cast<> in the Returns clause for basic_istringstream<>::rdbuf().
The same C-style cast is being used in 27.7.3.2, p1, D.7.2.2, p1, and
D.7.3.2, p1, and perhaps elsewhere. 27.7.6, p1 and D.7.2.2, p1 are missing
the cast altogether.

C-style casts have not been deprecated, so the first part of this
issue is stylistic rather than a matter of correctness.

Since the valarray va1 is non-const, the result of the sub-expression
va1[slice(1,4,3)] at line 1 is an rvalue of type const
std::slice_array<double>. This slice_array rvalue is then used to
construct va2. The constructor that is used to construct va2 is
declared like this:

template <class T>
valarray<T>::valarray(const slice_array<T> &);

Notice the constructor's const reference parameter. When the
constructor is called, a slice_array must be bound to this reference.
The rules for binding an rvalue to a const reference are in 8.5.3,
paragraph 5 (see also 13.3.3.1.4). Specifically, paragraph 5
indicates that a second slice_array rvalue is constructed (in this
case copy-constructed) from the first one; it is this second rvalue
that is bound to the reference parameter. Paragraph 5 also requires
that the constructor that is used for this purpose be callable,
regardless of whether the second rvalue is elided. The
copy-constructor in this case is not callable, however, because it is
private. Therefore, the compiler should report an error.

Since slice_arrays are always rvalues, the valarray constructor that has a
parameter of type const slice_array<T> & can never be called. The
same reasoning applies to the three other constructors and the four
assignment operators that are listed at the beginning of this post.
Furthermore, since these functions cannot be called, the valarray helper
classes are almost entirely useless.

Proposed resolution:

slice_array:

Make the copy constructor and copy-assignment operator declarations
public in the slice_array class template definition in 26.6.5 [template.slice.array]

remove paragraph 3 of 26.6.5 [template.slice.array]

remove the copy constructor declaration from [cons.slice.arr]

change paragraph 1 of [cons.slice.arr] to read "This constructor is declared
to be private. This constructor need not be defined."

remove the first sentence of paragraph 1 of 26.6.5.2 [slice.arr.assign]

Change the first three words of the second sentence of paragraph 1 of
26.6.5.2 [slice.arr.assign] to "These assignment operators have"

gslice_array:

Make the copy constructor and copy-assignment operator declarations
public in the gslice_array class template definition in 26.6.7 [template.gslice.array]

remove the note in paragraph 3 of 26.6.7 [template.gslice.array]

remove the copy constructor declaration from [gslice.array.cons]

change paragraph 1 of [gslice.array.cons] to read "This constructor is declared
to be private. This constructor need not be defined."

remove the first sentence of paragraph 1 of 26.6.7.2 [gslice.array.assign]

Change the first three words of the second sentence of paragraph 1 of
26.6.7.2 [gslice.array.assign] to "These assignment operators have"

mask_array:

Make the copy constructor and copy-assignment operator declarations
public in the mask_array class template definition in 26.6.8 [template.mask.array]

remove the note in paragraph 2 of 26.6.8 [template.mask.array]

remove the copy constructor declaration from [mask.array.cons]

change paragraph 1 of [mask.array.cons] to read "This constructor is declared
to be private. This constructor need not be defined."

remove the first sentence of paragraph 1 of 26.6.8.2 [mask.array.assign]

Change the first three words of the second sentence of paragraph 1 of
26.6.8.2 [mask.array.assign] to "These assignment operators have"

indirect_array:

Make the copy constructor and copy-assignment operator declarations
public in the indirect_array class definition in 26.6.9 [template.indirect.array]

remove the note in paragraph 2 of 26.6.9 [template.indirect.array]

remove the copy constructor declaration from [indirect.array.cons]

change the descriptive text in [indirect.array.cons] to read "This constructor is
declared to be private. This constructor need not be defined."

remove the first sentence of paragraph 1 of 26.6.9.2 [indirect.array.assign]

Change the first three words of the second sentence of paragraph 1 of
26.6.9.2 [indirect.array.assign] to "These assignment operators have"

Rationale:

Keeping the valarray constructors private is untenable. Merely
making valarray a friend of the helper classes isn't good enough,
because access to the copy constructor is checked in the user's
environment.

Making the assignment operator public is not strictly necessary to
solve this problem. A majority of the LWG (straw poll: 13-4)
believed we should make the assignment operators public, in addition
to the copy constructors, for reasons of symmetry and user
expectation.

(1) A program which is low on memory may end up throwing
std::bad_alloc instead of out_of_range because memory runs out while
constructing the exception object.

(2) An obvious implementation which stores a std::string data member
may end up invoking terminate() during exception unwinding because the
exception object allocates memory (or rather fails to) as it is being
copied.

There may be no cure for (1) other than changing the interface to
out_of_range, though one could reasonably argue that (1) is not a
defect. Personally I don't care that much if out-of-memory is reported
when I only have 20 bytes left, in the case when out_of_range would
have been reported. People who use exception-specifications might care
a lot, though.

There is a cure for (2), but it isn't completely obvious. I think a
note for implementors should be made in the standard. Avoiding
possible termination in this case shouldn't be left up to chance. The
cure is to use a reference-counted "string" implementation
in the exception object. I am not necessarily referring to a
std::string here; any simple reference-counting scheme for a NTBS
would do.

Further discussion, in email:

...I'm not so concerned about (1). After all, a library implementation
can add const char* constructors as an extension, and users don't
need to avail themselves of the standard exceptions, though this is
a lame position to be forced into. FWIW, std::exception and
std::bad_alloc don't require a temporary basic_string.

...I don't think the fixed-size buffer is a solution to the problem,
strictly speaking, because you can't satisfy the postcondition
strcmp(what(), what_arg.c_str()) == 0
for all values of what_arg (i.e. very long values). That means that
the only truly conforming solution requires a dynamic allocation.

Further discussion, from Redmond:

The most important progress we made at the Redmond meeting was
realizing that there are two separable issues here: the const
string& constructor, and the copy constructor. If a user writes
something like throw std::out_of_range("foo"), the const
string& constructor is invoked before anything gets thrown. The
copy constructor is potentially invoked during stack unwinding.

The copy constructor is a more serious problem, because failure
during stack unwinding invokes terminate. The copy
constructor must be nothrow. Curaçao: Howard thinks this
requirement may already be present.

The fundamental problem is that it's difficult to get the nothrow
requirement to work well with the requirement that the exception
objects store a string of unbounded size, particularly if you also try
to make the const string& constructor nothrow. Options discussed
include:

Limit the size of a string that exception objects are required to
throw: change the postconditions of 19.2.2 [domain.error] paragraph 3
and 19.2.6 [runtime.error] paragraph 3 to something like this:
"strncmp(what(), what_arg.c_str(), N) == 0, where N is an
implementation-defined constant no smaller than 256".

Allow the const string& constructor to throw, but not the
copy constructor. It's the implementor's responsibility to get it
right. (An implementor might use a simple refcount class.)

Compromise between the two: an implementation is not allowed to
throw if the string's length is less than some N, but, if it doesn't
throw, the string must compare equal to the argument.

Throwing a bad_alloc while trying to construct a message for another
exception-derived class is not necessarily a bad thing. And the
bad_alloc constructor already has a no throw spec on it (18.4.2.1).

Future:

All involved would like to see const char* constructors added, but
this should probably be done for C++0X as opposed to a DR.

I believe the no throw specs currently decorating these functions
could be improved by some kind of static no throw spec checking
mechanism (in a future C++ language). As they stand, the copy
constructors might fail via a call to unexpected. I think what is
intended here is that the copy constructors can't fail.

[Pre-Sydney: reopened at the request of Howard Hinnant.
Post-Redmond: James Kanze noticed that the copy constructors of
exception-derived classes do not have nothrow clauses. Those
classes have no copy constructors declared, meaning the
compiler-generated implicit copy constructors are used, and those
compiler-generated constructors might in principle throw anything.]

[
Oxford: The proposed resolution simply addresses the issue of constructing
the exception objects with const char* and string literals without
the need to explicitly include or construct a std::string.
]

-17- Before copying any parts of rhs, calls each registered callback
pair (fn,index) as (*fn)(erase_event,*this,index). After all parts but
exceptions() have been replaced, calls each callback pair that was
copied from rhs as (*fn)(copy_event,*this,index).

The name copy_event isn't defined anywhere. The intended name was
copyfmt_event.

I've been assuming (and probably everyone else has been assuming) that
allocator instances have a particular property, and I don't think that
property can be deduced from anything in Table 32.

I think we have to assume that allocator type conversion is a
homomorphism. That is, if x1 and x2 are of type X, where
X::value_type is T, and if type Y is X::template
rebind<U>::other, then Y(x1) == Y(x2) if and only if x1 == x2.

Further discussion: Howard Hinnant writes, in lib-7757:

I think I can prove that this is not provable by Table 32. And I agree
it needs to be true except for the "and only if". If x1 != x2, I see no
reason why it can't be true that Y(x1) == Y(x2). Admittedly I can't
think of a practical instance where this would happen, or be valuable.
But I also don't see a need to add that extra restriction. I think we
only need:

if (x1 == x2) then Y(x1) == Y(x2)

If we decide that == on allocators is transitive, then I think I can
prove the above. But I don't think == is necessarily transitive on
allocators. That is:

[Toronto: LWG members offered multiple opinions. One
opinion is that it should not be required that x1 == x2
implies Y(x1) == Y(x2), and that it should not even be
required that X(x1) == x1. Another opinion is that
the second line from the bottom in table 32 already implies the
desired property. This issue should be considered in light of
other issues related to allocator instances.]

[Lillehammer: Same conclusion as before: this should be
considered as part of an allocator redesign, not solved on its own.]

[
Batavia: An allocator redesign is not forthcoming and thus we fixed this one issue.
]

[
Toronto: Reopened at the request of the project editor (Pete) because the proposed
wording did not fit within the indicated table. The intent of the resolution remains
unchanged. Pablo to work with Pete on improved wording.
]

[
Kona (2007): The LWG adopted the proposed resolution of N2387 for this issue which
was subsequently split out into a separate paper N2436 for the purposes of voting.
The resolution in N2436 addresses this issue. The LWG voted to accelerate this
issue to Ready status to be voted into the WP at Kona.
]

The standard's description of basic_string<>::operator[]
seems to violate const correctness.

The standard (21.3.4/1) says that "If pos < size(),
returns data()[pos]." The types don't work. The
return value of data() is const charT*, but
operator[] has a non-const version whose return type is reference.

The synopsis of istream_iterator::operator++(int) in 24.5.1 shows
it as returning the iterator by value. 24.5.1.2, p5 shows the same
operator as returning the iterator by reference. That's incorrect
given the Effects clause below (since a temporary is returned). The
`&' is probably just a typo.

263. Severe restriction on basic_string reference counting

The note in paragraph 6 suggests that the invalidation rules for
references, pointers, and iterators in paragraph 5 permit a reference-
counted implementation (actually, according to paragraph 6, they permit
a "reference counted implementation", but this is a minor editorial fix).

However, the last sub-bullet is so worded as to make a reference-counted
implementation unviable. In the following example none of the
conditions for iterator invalidation are satisfied:

I have tested this on three string implementations, two of which were
reference counted. The reference-counted implementations gave
"surprising behavior" because they invalidated iterators on
the first call to non-const begin since construction. The current
wording does not permit such invalidation because it does not take
into account the first call since construction, only the first call
since various member and non-member function calls.

Proposed resolution:

Change the following sentence in 21.3 paragraph 5 from

Subsequent to any of the above uses except the forms of insert() and
erase() which return iterators, the first call to non-const member
functions operator[](), at(), begin(), rbegin(), end(), or rend().

to

Following construction or any of the above uses, except the forms of
insert() and erase() that return iterators, the first call to non-
const member functions operator[](), at(), begin(), rbegin(), end(),
or rend().

Table 69 requires linear time if [i, j) is sorted. Sorted is necessary but not sufficient.
Consider inserting a sorted range of even integers into a set<int> containing the odd
integers in the same range.

Proposed resolution:

In Table 69, in section 23.1.2, change the complexity clause for
insertion of a range from "N log(size() + N) (N is the distance
from i to j) in general; linear if [i, j) is sorted according to
value_comp()" to "N log(size() + N), where N is the distance
from i to j".

Rationale:

Testing for valid insertions could be less efficient than simply
inserting the elements when the range is not both sorted and between
two adjacent existing elements; this could be a QOI issue.

The LWG considered two other options: (a) specifying that the
complexity was linear if [i, j) is sorted according to value_comp()
and between two adjacent existing elements; or (b) changing to
Klog(size() + N) + (N - K) (N is the distance from i to j and K is the
number of elements which do not insert immediately after the previous
element from [i, j) including the first). The LWG felt that, since
we can't guarantee linear time complexity whenever the range to be
inserted is sorted, it's more trouble than it's worth to say that it's
linear in some special cases.

265. std::pair::pair() effects overly restrictive

I don't see any requirements on the types of the elements of the
std::pair container in 20.2.2. From the descriptions of the member
functions it appears that they must at least satisfy the requirements of
20.1.3 [lib.copyconstructible] and 20.1.4 [lib.default.con.req], and in
the case of the [in]equality operators also the requirements of 20.1.1
[lib.equalitycomparable] and 20.1.2 [lib.lessthancomparable].

I believe that the CopyConstructible requirement is unnecessary in
the case of 20.2.2, p2.

Proposed resolution:

Change the Effects clause in 20.2.2, p2 from

-2- Effects: Initializes its members as if implemented: pair() :
first(T1()), second(T2()) {}

to

-2- Effects: Initializes its members as if implemented: pair() :
first(), second() {}

Rationale:

The existing specification of pair's constructor appears to be a
historical artifact: there was concern that pair's members be properly
zero-initialized when they are built-in types. At one time there was
uncertainty about whether they would be zero-initialized if the
default constructor was written the obvious way. This has been
clarified by core issue 178, and there is no longer any doubt that
the straightforward implementation is correct.

This is a general problem with the exception classes in clause 18.
The proposed resolution is to remove the destructors from the class
synopses, rather than to document the destructors' behavior, because
removing them is more consistent with how exception classes are
described in clause 19.

268. Typo in locale synopsis

The synopsis of the class std::locale in 22.1.1 contains two typos:
the semicolons after the declarations of the default ctor
locale::locale() and the copy ctor locale::locale(const locale&)
are missing.

Each of the four binary search algorithms (lower_bound, upper_bound,
equal_range, binary_search) has a form that allows the user to pass a
comparison function object. According to 25.3, paragraph 2, that
comparison function object has to be a strict weak ordering.

This requirement is slightly too strict. Suppose we are searching
through a sequence containing objects of type X, where X is some
large record with an integer key. We might reasonably want to look
up a record by key, in which case we would want to write something
like this:

key_comp is not a strict weak ordering, but there is no reason to
prohibit its use in lower_bound.

There's no difficulty in implementing lower_bound so that it allows
the use of something like key_comp. (It will probably work unless an
implementor takes special pains to forbid it.) What's difficult is
formulating language in the standard to specify what kind of
comparison function is acceptable. We need a notion that's slightly
more general than that of a strict weak ordering, one that can encompass
a comparison function that involves different types. Expressing that
notion may be complicated.

Additional questions raised at the Toronto meeting:

Do we really want to specify what ordering the implementor must
use when calling the function object? The standard gives
specific expressions when describing these algorithms, but it also
says that other expressions (with different argument order) are
equivalent.

If we are specifying ordering, note that the standard uses both
orderings when describing equal_range.

Are we talking about requiring these algorithms to work properly
when passed a binary function object whose two argument types
are not the same, or are we talking about requirements when
they are passed a binary function object with several overloaded
versions of operator()?

The definition of a strict weak ordering does not appear to give
any guidance on issues of overloading; it only discusses expressions,
and all of the values in these expressions are of the same type.
Some clarification would seem to be in order.

Additional discussion from Copenhagen:

It was generally agreed that there is a real defect here: if
the predicate is merely required to be a Strict Weak Ordering, then
it's possible to pass in a function object with an overloaded
operator(), where the version that's actually called does something
completely inappropriate. (Such as returning a random value.)

An alternative formulation was presented in a paper distributed by
David Abrahams at the meeting, "Binary Search with Heterogeneous
Comparison", J16-01/0027 = WG21 N1313: Instead of viewing the
predicate as a Strict Weak Ordering acting on a sorted sequence, view
the predicate/value pair as something that partitions a sequence.
This is almost equivalent to saying that we should view binary search
as if we are given a unary predicate and a sequence, such that f(*p)
is true for all p below a specific point and false for all p above it.
The proposed resolution is based on that alternative formulation.

Proposed resolution:

Change 25.3 [lib.alg.sorting] paragraph 3 from:

3 For all algorithms that take Compare, there is a version that uses
operator< instead. That is, comp(*i, *j) != false defaults to *i <
*j != false. For the algorithms to work correctly, comp has to
induce a strict weak ordering on the values.

to:

3 For all algorithms that take Compare, there is a version that uses
operator< instead. That is, comp(*i, *j) != false defaults to *i
< *j != false. For algorithms other than those described in
lib.alg.binary.search (25.3.3) to work correctly, comp has to induce
a strict weak ordering on the values.

Add the following paragraph after 25.3 [lib.alg.sorting] paragraph 5:

-6- A sequence [start, finish) is partitioned with respect to an
expression f(e) if there exists an integer n such that
for all 0 <= i < distance(start, finish), f(*(start + i)) is true if
and only if i < n.

Change 25.3.3 [lib.alg.binary.search] paragraph 1 from:

-1- All of the algorithms in this section are versions of binary
search and assume that the sequence being searched is in order
according to the implied or explicit comparison function. They work
on non-random access iterators minimizing the number of
comparisons, which will be logarithmic for all types of
iterators. They are especially appropriate for random access
iterators, because these algorithms do a logarithmic number of
steps through the data structure. For non-random access iterators
they execute a linear number of steps.

to:

-1- All of the algorithms in this section are versions of binary
search and assume that the sequence being searched is partitioned
with respect to an expression formed by binding the search key to
an argument of the implied or explicit comparison function. They
work on non-random access iterators minimizing the number of
comparisons, which will be logarithmic for all types of
iterators. They are especially appropriate for random access
iterators, because these algorithms do a logarithmic number of
steps through the data structure. For non-random access iterators
they execute a linear number of steps.

Change 25.3.3.1 [lib.lower.bound] paragraph 1 from:

-1- Requires: Type T is LessThanComparable
(lib.lessthancomparable).

to:

-1- Requires: The elements e of [first, last) are partitioned with
respect to the expression e < value or comp(e, value)

Remove 25.3.3.1 [lib.lower.bound] paragraph 2:

-2- Effects: Finds the first position into which value can be
inserted without violating the ordering.

Change 25.3.3.2 [lib.upper.bound] paragraph 1 from:

-1- Requires: Type T is LessThanComparable (lib.lessthancomparable).

to:

-1- Requires: The elements e of [first, last) are partitioned with
respect to the expression !(value < e) or !comp(value, e)

Remove 25.3.3.2 [lib.upper.bound] paragraph 2:

-2- Effects: Finds the furthermost position into which value can be
inserted without violating the ordering.

[Redmond: Minor changes in wording. (Removed "non-negative", and
changed the "other than those described in" wording.) Also, the LWG
decided to accept the "optional" part.]

Rationale:

The proposed resolution reinterprets binary search. Instead of
thinking about searching for a value in a sorted range, we view that
as an important special case of a more general algorithm: searching
for the partition point in a partitioned range.

We also add a guarantee that the old wording did not: we ensure
that the upper bound is no earlier than the lower bound, that
the pair returned by equal_range is a valid range, and that the first
part of that pair is the lower bound.

273. Missing ios_base qualification on members of a dependent class

27.5.2.4.2, p4, and 27.8.1.6, p2, 27.8.1.7, p3, 27.8.1.9, p2,
27.8.1.10, p3 refer to in and/or out without ios_base::
qualification. That's incorrect since the names are members of a
dependent base class (14.6.2 [temp.dep]) and thus not visible.

Proposed resolution:

Qualify the names with the name of the class of which they are
members, i.e., ios_base.

I see that table 31 in 20.1.5, p3 allows T in std::allocator<T> to be of
any type. But the synopsis in 20.4.1 calls for allocator<>::address() to
be overloaded on reference and const_reference, which is ill-formed for
all T = const U. In other words, this won't work:

template class std::allocator<const int>;

The obvious solution is to disallow specializations of allocators on
const types. However, while containers' elements are required to be
assignable (which rules out specializations on const T's), I think that
allocators might perhaps be potentially useful for const values in other
contexts. So if allocators are to allow const types a partial
specialization of std::allocator<const T> would probably have to be
provided.

Two resolutions were originally proposed: one that partially
specialized std::allocator for const types, and one that said an
allocator's value type may not be const. The LWG chose the second.
The first wouldn't be appropriate, because allocators are intended for
use by containers, and const value types don't work in containers.
Encouraging the use of allocators with const value types would only
lead to unsafe code.

The original text for proposed resolution 2 was modified so that it
also forbids volatile types and reference types.

[Curaçao: LWG double checked and believes volatile is correctly
excluded from the PR.]

23.1/3 states that the objects stored in a container must be
Assignable. 23.4.4 [map], paragraph 2,
states that map satisfies all requirements for a container, while in
the same time defining value_type as pair<const Key, T> - a type
that is not Assignable.

It should be noted that there exists a valid and non-contradictory
interpretation of the current text. The wording in 23.1/3 avoids
mentioning value_type, referring instead to "objects stored in a
container." One might argue that map does not store objects of
type map::value_type, but of map::mapped_type instead, and that the
Assignable requirement applies to map::mapped_type, not
map::value_type.

However, this makes map a special case (other containers store objects of
type value_type) and the Assignable requirement is needlessly restrictive in
general.

For example, the proposed resolution of active library issue
103 is to make set::iterator a constant iterator; this
means that no set operations can exploit the fact that the stored
objects are Assignable.

-3- The type of objects stored in these components must meet the
requirements of CopyConstructible types (lib.copyconstructible).

23.1/4: Modify to make clear that this requirement is not for all
containers. Change to:

-4- Table 64 defines the Assignable requirement. Some containers
require this property of the types to be stored in the container. T is
the type used to instantiate the container. t is a value of T, and u is
a value of (possibly const) T.

23.1, Table 65: in the first row, change "T is Assignable" to "T is
CopyConstructible".

23.2.1/2: Add sentence for Assignable requirement. Change to:

-2- A deque satisfies all of the requirements of a container and of a
reversible container (given in tables in lib.container.requirements) and
of a sequence, including the optional sequence requirements
(lib.sequence.reqmts). In addition to the requirements on the stored
object described in 23.1[lib.container.requirements], the stored object
must also meet the requirements of Assignable. Descriptions are
provided here only for operations on deque that are not described in one
of these tables or for operations where there is additional semantic
information.

-2- A list satisfies all of the requirements of a container and of a
reversible container (given in two tables in lib.container.requirements)
and of a sequence, including most of the optional sequence
requirements (lib.sequence.reqmts). The exceptions are the operator[]
and at member functions, which are not provided.
[Footnote: These member functions are only provided by containers whose
iterators are random access iterators. --- end footnote]

list does not require the stored type T to be Assignable unless the
following methods are instantiated:
[Footnote: Implementors are permitted but not required to take advantage
of T's Assignable properties for these methods. --- end footnote]

Descriptions are provided here only for operations on list that are not
described in one of these tables or for operations where there is
additional semantic information.

23.2.4/2: Add sentence for Assignable requirement. Change to:

-2- A vector satisfies all of the requirements of a container and of a
reversible container (given in two tables in lib.container.requirements)
and of a sequence, including most of the optional sequence requirements
(lib.sequence.reqmts). The exceptions are the push_front and pop_front
member functions, which are not provided. In addition to the
requirements on the stored object described in
23.1[lib.container.requirements], the stored object must also meet the
requirements of Assignable. Descriptions are provided here only for
operations on vector that are not described in one of these tables or
for operations where there is additional semantic information.

Rationale:

list, set, multiset, map, multimap are able to store non-Assignables.
However, there is some concern about list<T>:
although in general there's no reason for T to be Assignable, some
implementations of the member functions operator= and
assign do rely on that requirement. The LWG does not want
to forbid such implementations.

Note that the type stored in a standard container must still satisfy
the requirements of the container's allocator; this rules out, for
example, such types as "const int". See issue 274
for more details.

In principle we could also relax the "Assignable" requirement for
individual vector member functions, such as
push_back. However, the LWG did not see great value in such
selective relaxation. Doing so would remove implementors' freedom to
implement vector::push_back in terms of
vector::insert.

But what does the C++ Standard mean by "invalidate"? You
can still dereference the iterator to a spliced list element, but
you'd better not use it to delimit a range within the original
list. For the latter operation, it has definitely lost some of its
validity.

If we accept the proposed resolution to issue 250,
then we'd better clarify that a "valid" iterator need no
longer designate an element within the same container as it once did.
We then have to clarify what we mean by invalidating a past-the-end
iterator, as when a vector or string grows by reallocation. Clearly,
such an iterator has a different kind of validity. Perhaps we should
introduce separate terms for the two kinds of "validity."

Proposed resolution:

Add the following text to the end of section X [iterator.concepts],
after paragraph 5:

An invalid iterator is an iterator that may be
singular. [Footnote: This definition applies to pointers, since
pointers are iterators. The effect of dereferencing an iterator that
has been invalidated is undefined.]

[post-Copenhagen: Matt provided wording.]

[Redmond: General agreement with the intent, some objections to
the wording. Dave provided new wording.]

Rationale:

This resolution simply defines a term that the Standard uses but
never defines, "invalid", in terms of a term that is defined,
"singular".

Why do we say "may be singular", instead of "is singular"? That's
because a valid iterator is one that is known to be nonsingular.
Invalidating an iterator means changing it in such a way that it's
no longer known to be nonsingular. An example: inserting an
element into the middle of a vector is correctly said to invalidate
all iterators pointing into the vector. That doesn't necessarily
mean they all become singular.

280. Comparison of reverse_iterator to const reverse_iterator

This came from an email from Steve Cleary to Fergus in reference to
issue 179. The library working group briefly discussed
this in Toronto and believed it should be a separate issue. There were
also some reservations about whether this was a worthwhile problem to
fix.

Steve said: "Fixing reverse_iterator. std::reverse_iterator can
(and should) be changed to preserve these additional
requirements." He also said in email that it can be done without
breaking user's code: "If you take a look at my suggested
solution, reverse_iterator doesn't have to take two parameters; there
is no danger of breaking existing code, except someone taking the
address of one of the reverse_iterator global operator functions, and
I have to doubt if anyone has ever done that... But, just in
case they have, you can leave the old global functions in as well --
they won't interfere with the two-template-argument functions. With
that, I don't see how any user code could break."

Also make the addition/changes for these signatures in
24.5.1.3 [reverse.iter.ops].

[
Copenhagen: The LWG is concerned that the proposed resolution
introduces new overloads. Experience shows that introducing
overloads is always risky, and that it would be inappropriate to
make this change without implementation experience. It may be
desirable to provide this feature in a different way.
]

[
Lillehammer: We now have implementation experience, and agree that
this solution is safe and correct.
]

The requirements in 25.3.7, p1 and 4 call for T to satisfy the
requirements of LessThanComparable ( [lessthancomparable])
and CopyConstructible (17.6.3.1 [utility.arg.requirements]).
Since the functions take and return their arguments and result by
const reference, I believe the CopyConstructible requirement
is unnecessary.

Paragraph 16 mistakenly singles out integral types for inserting
thousands_sep() characters. This conflicts with the syntax for floating
point numbers described under 22.2.3.1/2.

Proposed resolution:

Change paragraph 16 from:

For integral types, punct.thousands_sep() characters are inserted into
the sequence as determined by the value returned by punct.do_grouping()
using the method described in 22.4.3.1.2 [facet.numpunct.virtuals].

To:

For arithmetic types, punct.thousands_sep() characters are inserted into
the sequence as determined by the value returned by punct.do_grouping()
using the method described in 22.4.3.1.2 [facet.numpunct.virtuals].

[
Copenhagen: Opinions were divided about whether this is actually an
inconsistency, but at best it seems to have been unintentional. This
is only an issue for floating-point output: The standard is
unambiguous that implementations must parse thousands_sep characters
when performing floating-point input. The standard is also unambiguous that
this requirement does not apply to the "C" locale.
]

[
A survey of existing practice is needed; it is believed that some
implementations do insert thousands_sep characters for floating-point
output and others fail to insert thousands_sep characters for
floating-point input even though this is unambiguously required by the
standard.
]

(revision of the further discussion)
There are a number of problems with the requires clauses for the
algorithms in 25.1 and 25.2. The requires clause of each algorithm
should describe the necessary and sufficient requirements on the inputs
to the algorithm such that the algorithm compiles and runs properly.
Many of the requires clauses fail to do this. Here is a summary of the kinds
of mistakes:

Use of EqualityComparable, which only puts requirements on a single
type, when in fact an equality operator is required between two
different types, typically either T and the iterator's value type
or between the value types of two different iterators.

Use of Assignable for T when in fact what was needed is Assignable
for the value_type of the iterator, and convertability from T to the
value_type of the iterator. Or for output iterators, the requirement
should be that T is writable to the iterator (output iterators do
not have value types).

Here is the list of algorithms that contain mistakes:

25.1.2 std::find

25.1.6 std::count

25.1.8 std::equal

25.1.9 std::search, std::search_n

25.2.4 std::replace, std::replace_copy

25.2.5 std::fill

25.2.7 std::remove, std::remove_copy

Also, in the requirements for EqualityComparable, the requirement that
the operator be defined for const objects is lacking.

Proposed resolution:

20.1.1 Change p1 from

In Table 28, T is a type to be supplied by a C++ program
instantiating a template, a, b, and c are
values of type T.

to

In Table 28, T is a type to be supplied by a C++ program
instantiating a template, a, b, and c are
values of type const T.

25 Between p8 and p9

Add the following sentence:

When the description of an algorithm gives an expression such as
*first == value for a condition, it is required that the expression
evaluate to either true or false in boolean contexts.

-1- Requires: The expression value is writable to
the output iterator. The type Size is convertible to an
integral type (4.7, 12.3).

25.2.7 Change p1 from

-1- Requires: Type T is EqualityComparable (20.1.1).

to

-1- Requires: The value type of the iterator must be
Assignable (23.1).

Rationale:

The general idea of the proposed solution is to remove the faulty
requires clauses and let the returns and effects clauses speak for
themselves. That is, the returns clauses contain expressions that must
be valid, and therefore already imply the correct requirements. In
addition, a sentence is added at the beginning of chapter 25 saying
that expressions given as conditions must evaluate to true or false in
a boolean context. An alternative would be to say that the type of
these condition expressions must be literally bool, but that would be
imposing a greater restriction than what the standard currently says
(which is convertible to bool).

284. unportable example in 20.3.7, p6

The example in 20.8.5 [comparisons], p6 shows how to use the C
library function strcmp() with the function pointer adapter
ptr_fun(). But since it's unspecified whether the C library
functions have extern "C" or extern
"C++" linkage [17.6.2.3 [using.linkage]], and since
function pointers with different language linkage specifications
(7.5 [dcl.link]) are incompatible, whether this example is
well-formed is unspecified.

[Copenhagen: Minor change in the proposed resolution. Since this
issue deals in part with C and C++ linkage, it was believed to be too
confusing for the strings in the example to be "C" and "C++".
]

[Redmond: More minor changes. Got rid of the footnote (which
seems to make a sweeping normative requirement, even though footnotes
aren't normative), and changed the sentence after the footnote so that
it corresponds to the new code fragment.]

The standard library contains four algorithms that compute set
operations on sorted ranges: set_union, set_intersection,
set_difference, and set_symmetric_difference. Each
of these algorithms takes two sorted ranges as inputs, and writes the
output of the appropriate set operation to an output range. The elements
in the output range are sorted.

The ordinary mathematical definitions are generalized so that they
apply to ranges containing multiple copies of a given element. Two
elements are considered to be "the same" if, according to an
ordering relation provided by the user, neither one is less than the
other. So, for example, if one input range contains five copies of an
element and another contains three, the output range of set_union
will contain five copies, the output range of
set_intersection will contain three, the output range of
set_difference will contain two, and the output range of
set_symmetric_difference will contain two.

Because two elements can be "the same" for the purposes
of these set algorithms, without being identical in other respects
(consider, for example, strings under case-insensitive comparison),
this raises a number of unanswered questions:

If we're copying an element that's present in both of the
input ranges, which one do we copy it from?

If there are n copies of an element in the relevant
input range, and the output range will contain fewer copies (say
m) which ones do we choose? The first m, or the last
m, or something else?

Are these operations stable? That is, does a run of equivalent
elements appear in the output range in the same order as it
appeared in the input range(s)?

The standard should either answer these questions, or explicitly
say that the answers are unspecified. I prefer the former option,
since, as far as I know, all existing implementations behave the
same way.

Proposed resolution:

Add the following to the end of 25.4.5.2 [set.union] paragraph 5:

If [first1, last1) contains m elements that are equivalent to
each other and [first2, last2) contains n elements that are
equivalent to them, then max(m, n) of these elements
will be copied to the output range: all m of these elements
from [first1, last1), and the last max(n-m, 0) of them from
[first2, last2), in that order.

Add the following to the end of 25.4.5.3 [set.intersection] paragraph 5:

If [first1, last1) contains m elements that are equivalent to each
other and [first2, last2) contains n elements that are
equivalent to them, the first min(m, n) of those
elements from [first1, last1) are copied to the output range.

Add a new paragraph, Notes, after 25.4.5.4 [set.difference]
paragraph 4:

If [first1, last1) contains m elements that are equivalent to each
other and [first2, last2) contains n elements that are
equivalent to them, the last max(m-n, 0) elements from
[first1, last1) are copied to the output range.

Add a new paragraph, Notes, after 25.4.5.5 [set.symmetric.difference]
paragraph 4:

If [first1, last1) contains m elements that are equivalent to
each other and [first2, last2) contains n elements that are
equivalent to them, then |m - n| of those elements will be
copied to the output range: the last m - n of these elements
from [first1, last1) if m > n, and the last n -
m of these elements from [first2, last2) if m < n.

[Santa Cruz: it's believed that this language is clearer than
what's in the Standard. However, it's also believed that the
Standard may already make these guarantees (although not quite in
these words). Bill and Howard will check and see whether they think
that some or all of these changes may be redundant. If so, we may
close this issue as NAD.]

Rationale:

For simple cases, these descriptions are equivalent to what's
already in the Standard. For more complicated cases, they describe
the behavior of existing implementations.

The Effects clause of the member function copyfmt() in
27.4.4.2, p15 doesn't consider the case where the left-hand side
argument is identical to the argument on the right-hand side, that is
(this == &rhs). If the two arguments are identical there
is no need to copy any of the data members or call any callbacks
registered with register_callback(). Also, as Howard Hinnant
points out in message c++std-lib-8149 it appears to be incorrect to
allow the object to fire erase_event followed by
copyfmt_event since the callback handling the latter event
may inadvertently attempt to access memory freed by the former.

Proposed resolution:

Change the Effects clause in 27.4.4.2, p15 from

-15- Effects: Assigns to the member objects of *this
the corresponding member objects of rhs, except that...

to

-15- Effects: If (this == &rhs) does nothing. Otherwise
assigns to the member objects of *this the corresponding member
objects of rhs, except that...

294. User defined macros and standard headers

Paragraph 2 of 17.6.4.3.1 [macro.names] reads: "A
translation unit that includes a header shall not contain any macros
that define names declared in that header." As I read this, it
would mean that the following program is legal:

#define npos 3.14
#include <sstream>

since npos is not defined in <sstream>. It is, however, defined
in <string>, and it is hard to imagine an implementation in
which <sstream> didn't include <string>.

I think that this phrase was probably formulated before it was
decided that a standard header may freely include other standard
headers. The phrase would be perfectly appropriate for C, for
example. In light of 17.6.5.2 [res.on.headers] paragraph 1, however,
it isn't stringent enough.

Proposed resolution:

For 17.6.4.3.1 [macro.names], replace the current wording, which reads:

Each name defined as a macro in a header is reserved to the
implementation for any use if the translation unit includes
the header.168)

A translation unit that includes a header shall not contain any
macros that define names declared or defined in that header. Nor shall
such a translation unit define macros for names lexically
identical to keywords.

168) It is not permissible to remove a library macro definition by
using the #undef directive.

with the wording:

A translation unit that includes a standard library header shall not
#define or #undef names declared in any standard library header.

A translation unit shall not #define or #undef names lexically
identical to keywords.

Table 80 lists the contents of the <cmath> header. It does not
list abs(). However, 26.5, paragraph 6, which lists added
signatures present in <cmath>, does say that several overloads
of abs() should be defined in <cmath>.

[Copenhagen: Modified proposed resolution so that it also gets
rid of that vestigial list of functions in paragraph 1.]

Rationale:

All this DR does is fix a typo; it's uncontroversial. A
separate question is whether we're doing the right thing in
putting some overloads in <cmath> that we aren't also
putting in <cstdlib>. That's issue 323.

296. Missing descriptions and requirements of pair operators

The synopsis of the header <utility> in 20.2 [utility]
lists the complete set of equality and relational operators for pair
but the section describing the template and the operators only describes
operator==() and operator<(), and it fails to mention
any requirements on the template arguments. The remaining operators are
not mentioned at all.

[
2009-09-27 Alisdair reopens.
]

The issue is a lack of wording specifying the semantics of std::pair
relational operators. The rationale is that this is covered by
catch-all wording in the relops component, and that as relops directly
precedes pair in the document this is an easy connection to make.

Reading the current working paper I make two observations:

relops no longer immediately precedes pair in the order of
specification. However, even if it did, there is a lot of pair
specification itself between the (apparently) unrelated relops and the
relational operators for pair. (The catch-all still requires
operator== and operator< to be specified
explicitly)

No other library component relies on the catch-all clause. The following
all explicitly document all six relational operators, usually in a
manner that could have deferred to the relops clause.

The container components provide their own (equivalent) definition in
23.2.1 [container.requirements.general] Table 90 -- Container
requirements and do so do not defer to relops.

Shared_ptr explicitly documents operator!= and does
not supply the other 3 missing operators
(>,>=,<=) so does not meet the
requirements of the relops clause.

Weak_ptr only supports operator< so would not be
covered by relops.

At the very least I would request a note pointing to the relops clause
we rely on to provide this definition. If this route is taken, I would
recommend reducing many of the above listed clauses to a similar note
rather than providing redundant specification.

My preference would be to supply the 4 missing specifications consistent
with the rest of the library.

[
2009-10-11 Daniel opens 1233 which deals with the same issue as
it pertains to unique_ptr.
]

20.2.1 [operators] paragraph 10 already specifies the semantics.
That paragraph says that, if declarations of operator!=, operator>,
operator<=, and operator>= appear without definitions, they are
defined as specified in 20.2.1 [operators]. There should be no user
confusion, since that paragraph happens to immediately precede the
specification of pair.

297. const_mem_fun_t<>::argument_type should be const T*

The class templates const_mem_fun_t in 20.5.8, p8 and
const_mem_fun1_t
in 20.5.8, p9 derive from unary_function<T*, S>, and
binary_function<T*,
A, S>, respectively. Consequently, their argument_type, and
first_argument_type
members, respectively, are both defined to be T* (non-const).
However, their function call member operator takes a const T*
argument. It is my opinion that argument_type should be const
T* instead, so that one can easily refer to it in generic code. The
example below derived from existing code fails to compile due to the
discrepancy:

#1 foo() takes a plain unqualified X* as an argument
#2 the type of the pointer is incompatible with the type of the member function
#3 the address of a constant being passed to a function taking a non-const X*

Proposed resolution:

Replace the top portion of the definition of the class template
const_mem_fun_t in 20.5.8, p8

298. ::operator delete[] requirement incorrect/insufficient

The default behavior of operator delete[] described in 18.5.1.2, p12 -
namely that for non-null value of ptr, the operator reclaims storage
allocated by the earlier call to the default operator new[] - is not
correct in all cases. Since the specified operator new[] default
behavior is to call operator new (18.5.1.2, p4, p8), which can be
replaced, along with operator delete, by the user, to implement their
own memory management, the specified default behavior of operator
delete[] must be to call operator delete.

Proposed resolution:

Change 18.5.1.2, p12 from

-12-Default behavior:

For a null value of ptr , does nothing.

Any other value of ptr shall be a value returned
earlier by a call to the default operator new[](std::size_t).
[Footnote: The value must not have been invalidated by an intervening
call to operator delete[](void*) (17.6.4.9 [res.on.arguments]).
--- end footnote]
For such a non-null value of ptr , reclaims storage
allocated by the earlier call to the default operator new[].

300. list::merge() specification incomplete

The "Effects" clause for list::merge() (23.3.5.5 [list.ops], p23)
appears to be incomplete: it doesn't cover the case where the argument
list is identical to *this (i.e., this == &x). The requirement in the
note in p24 (below) is that x be empty after the merge which is surely
unintended in this case.

Proposed resolution:

In 23.3.5.5 [list.ops], replace paragraphs 23-25 with:

23 Effects: if (&x == this) does nothing; otherwise, merges the two
sorted ranges [begin(), end()) and [x.begin(), x.end()). The result
is a range in which the elements will be sorted in non-decreasing
order according to the ordering defined by comp; that is, for every
iterator i in the range other than the first, the condition comp(*i,
*(i - 1)) will be false.

24 Notes: Stable: if (&x != this), then for equivalent elements in the
two original ranges, the elements from the original range [begin(),
end()) always precede the elements from the original range [x.begin(),
x.end()). If (&x != this) the range [x.begin(), x.end()) is empty
after the merge.

25 Complexity: At most size() + x.size() - 1 applications of comp if
(&x != this); otherwise, no applications of comp are performed. If
an exception is thrown other than by a comparison there are no
effects.

[Copenhagen: The original proposed resolution did not fix all of
the problems in 23.3.5.5 [list.ops], p22-25. Three different
paragraphs (23, 24, 25) describe the effects of merge.
Changing p23, without changing the other two, appears to introduce
contradictions. Additionally, "merges the argument list into the
list" is excessively vague.]

303. Bitset input operator underspecified

In 23.3.5.3, we are told that bitset's input operator
"Extracts up to N (single-byte) characters from
is.", where is is a stream of type
basic_istream<charT, traits>.

The standard does not say what it means to extract single byte
characters from a stream whose character type, charT, is in
general not a single-byte character type. Existing implementations
differ.

A reasonable solution will probably involve widen() and/or
narrow(), since they are the supplied mechanism for
converting a single character between char and
arbitrary charT.

Narrowing the input characters is not the same as widening the
literals '0' and '1', because there may be some
locales in which more than one wide character maps to the narrow
character '0'. Narrowing means that alternate
representations may be used for bitset input, widening means that
they may not be.

Note that for numeric input, num_get<>
(22.2.2.1.2/8) compares input characters to widened version of narrow
character literals.

From Pete Becker, in c++std-lib-8224:

Different writing systems can have different representations for the
digits that represent 0 and 1. For example, in the Unicode representation
of the Devanagari script (used in many of the Indic languages) the digit 0
is 0x0966, and the digit 1 is 0x0967. Calling narrow would translate those
into '0' and '1'. But Unicode also provides the ASCII values 0x0030 and
0x0031 for the Latin representations of '0' and '1', as well as code
points for the same numeric values in several other scripts (Tamil has no
character for 0, but does have the digits 1-9), and any of these values
would also be narrowed to '0' and '1'.

...

It's fairly common to intermix both native and Latin
representations of numbers in a document. So I think the rule has to be
that if a wide character represents a digit whose value is 0 then the bit
should be cleared; if it represents a digit whose value is 1 then the bit
should be set; otherwise throw an exception. So in a Devanagari locale,
both 0x0966 and 0x0030 would clear the bit, and both 0x0967 and 0x0031
would set it. Widen can't do that. It would pick one of those two values,
and exclude the other one.

From Jens Maurer, in c++std-lib-8233:

Whatever we decide, I would find it most surprising if
bitset conversion worked differently from int conversion
with regard to alternate local representations of
numbers.

Thus, I think the options are:

Have a new defect issue for 22.2.2.1.2/8 so that it will
require the use of narrow().

Have a defect issue for bitset() which describes clearly
that widen() is to be used.

Proposed resolution:

Replace the first two sentences of paragraph 5 with:

Extracts up to N characters from is. Stores these
characters in a temporary object str of type
basic_string<charT, traits>, then evaluates the
expression x = bitset<N>(str).

Replace the third bullet item in paragraph 5 with:

the next input character is neither is.widen('0')
nor is.widen('1') (in which case the input character
is not extracted).

Rationale:

Input for bitset should work the same way as numeric
input. Using widen does mean that alternative digit
representations will not be recognized, but this was a known
consequence of the design choice.

codecvt<wchar_t,char,mbstate_t> converts between the native
character sets for tiny and wide characters. Instantiations on
mbstate_t perform conversion between encodings known to the library
implementor.

The semantics of do_in and do_length are linked. What one does must
be consistent with what the other does. 22.2.1.5/3 leads me to
believe that the vendor is allowed to choose the algorithm that
codecvt<wchar_t,char,mbstate_t>::do_in performs so that it makes
his customers happy on a given platform. But 22.2.1.5.2/10 explicitly
says what codecvt<wchar_t,char,mbstate_t>::do_length must
return. And thus indirectly specifies the algorithm that
codecvt<wchar_t,char,mbstate_t>::do_in must perform. I believe
that this is not what was intended and is a defect.

Discussion from the -lib reflector:
This proposal would have the effect of making the semantics of
all of the virtual functions in codecvt<wchar_t, char,
mbstate_t> implementation specified. Is that what we want, or
do we want to mandate specific behavior for the base class virtuals
and leave the implementation specified behavior for the codecvt_byname
derived class? The tradeoff is that the former allows implementors to
write a base class that actually does something useful, while the
latter gives users a way to get known and specified---albeit
useless---behavior, and is consistent with the way the standard
handles other facets. It is not clear what the original intention
was.

Nathan has suggested a compromise: a character that is a widened version
of a character in the basic execution character set must be
converted to a one-byte sequence, but there is no such requirement
for characters that are not part of the basic execution character set.

Proposed resolution:

Change 22.2.1.5.2/5 from:

The instantiations required in Table 51 (lib.locale.category), namely
codecvt<wchar_t,char,mbstate_t> and
codecvt<char,char,mbstate_t>, store no characters. Stores no more
than (to_limit-to) destination elements. It always leaves the to_next
pointer pointing one beyond the last element successfully stored.

to:

Stores no more than (to_limit-to) destination elements, and leaves the
to_next pointer pointing one beyond the last element successfully
stored. codecvt<char,char,mbstate_t> stores no characters.

Change 22.2.1.5.2/10 from:

-10- Returns: (from_next-from) where from_next is the largest value in
the range [from,from_end] such that the sequence of values in the
range [from,from_next) represents max or fewer valid complete
characters of type internT. The instantiations required in Table 51
(21.1.1.1.1), namely codecvt<wchar_t, char, mbstate_t> and
codecvt<char, char, mbstate_t>, return the lesser of max and
(from_end-from).

to:

-10- Returns: (from_next-from) where from_next is the largest value in
the range [from,from_end] such that the sequence of values in the range
[from,from_next) represents max or fewer valid complete characters of
type internT. The instantiation codecvt<char, char, mbstate_t> returns
the lesser of max and (from_end-from).

[Redmond: Nathan suggested an alternative resolution: same as
above, but require that, in the default encoding, a character from the
basic execution character set would map to a single external
character. The straw poll was 8-1 in favor of the proposed
resolution.]

Rationale:

The default encoding should be whatever users of a given platform
would expect to be the most natural. This varies from platform to
platform. In many cases there is a preexisting C library, and users
would expect the default encoding to be whatever C uses in the default
"C" locale. We could impose a guarantee like the one Nathan suggested
(a character from the basic execution character set must map to a
single external character), but this would rule out important
encodings that are in common use: it would rule out JIS, for
example, and it would rule out a fixed-width encoding of UCS-4.

18.1, paragraph 5, reads: "The macro offsetof
accepts a restricted set of type arguments in this
International Standard. type shall be a POD structure or a POD
union (clause 9). The result of applying the offsetof macro to a field
that is a static data member or a function member is
undefined."

For the POD requirement, it doesn't say "no diagnostic
required" or "undefined behavior". I read 1.4 [intro.compliance], paragraph 1, to mean that a diagnostic is required.
It's not clear whether this requirement was intended. While it's
possible to provide such a diagnostic, the extra complication doesn't
seem to add any value.

Proposed resolution:

Change 18.1, paragraph 5, to "If type is not a POD
structure or a POD union the results are undefined."

[Copenhagen: straw poll was 7-4 in favor. It was generally
agreed that requiring a diagnostic was inadvertent, but some LWG
members thought that diagnostics should be required whenever
possible.]

The standard is currently inconsistent in 23.3.5.3 [list.capacity]
paragraph 1 and 23.3.5.4 [list.modifiers] paragraph 1.
23.2.3.3/1, for example, says:

-1- Any sequence supporting operations back(), push_back() and pop_back()
can be used to instantiate stack. In particular, vector (lib.vector), list
(lib.list) and deque (lib.deque) can be used.

But this is false: vector<bool> can not be used, because the
container adaptors return a T& rather than using the underlying
container's reference type.

This is a contradiction that can be fixed by:

1. Modifying these paragraphs to say that vector<bool>
is an exception.

2. Removing the vector<bool> specialization.

3. Changing the return types of stack and priority_queue to use
reference typedef's.

I propose 3. This does not preclude option 2 if we choose to do it
later (see issue 96); the issues are independent. Option
3 offers a small step towards support for proxied containers. This
small step fixes a current contradiction, is easy for vendors to
implement, is already implemented in at least one popular lib, and
does not break any code.

308. Table 82 mentions unrelated headers

Table 82 in section 27 mentions the header <cstdlib> for String
streams (27.8 [string.streams]) and the headers <cstdio> and
<cwchar> for File streams (27.9 [file.streams]). It's not clear
why these headers are mentioned in this context since they do not
define any of the library entities described by the
subclauses. According to 17.6.1.1 [contents], only such headers
are to be listed in the summary.

Proposed resolution:

Remove <cstdlib> and <cwchar> from
Table 82.

[Copenhagen: changed the proposed resolution slightly. The
original proposed resolution also said to remove <cstdio> from
Table 82. However, <cstdio> is mentioned several times within
section 27.9 [file.streams], including 27.9.2 [c.files].]

The C standard says in 7.1.4 that it is unspecified whether errno is a
macro or an identifier with external linkage. In some implementations
it can be either, depending on compile-time options. (E.g., on
Solaris in multi-threading mode, errno is a macro that expands to a
function call, but is an extern int otherwise. "Unspecified" allows
such variability.)

The C++ standard:

17.4.1.2 says in a note that errno must be a macro in C. (false)

17.4.3.1.3 footnote 166 says errno is reserved as an external
name (true), and implies that it is an identifier.

19.3 simply lists errno as a macro (by what reasoning?) and goes
on to say that the contents of the C++ <errno.h> are the
same as in C, begging the question.

C.2, table 95 lists errno as a macro, without comment.

I find no other references to errno.

We should either explicitly say that errno must be a macro, even
though it need not be a macro in C, or else explicitly leave it
unspecified. We also need to say something about namespace std.
A user who includes <cerrno> needs to know whether to write
errno, or ::errno, or std::errno, or
else <cerrno> is useless.

Two acceptable fixes:

errno must be a macro. This is trivially satisfied by adding
#define errno (::std::errno)
to the headers if errno is not already a macro. You then always
write errno without any scope qualification, and it always expands
to a correct reference. Since it is always a macro, you know to
avoid using errno as a local identifier.

errno is in the global namespace. This fix is inferior, because
::errno is not guaranteed to be well-formed.

[
This issue was first raised in 1999, but it slipped through
the cracks.
]

Proposed resolution:

Change the Note in section 17.4.1.2p5 from

Note: the names defined as macros in C include the following:
assert, errno, offsetof, setjmp, va_arg, va_end, and va_start.

to

Note: the names defined as macros in C include the following:
assert, offsetof, setjmp, va_arg, va_end, and va_start.

In section 19.3, change paragraph 2 from

The contents are the same as the Standard C library header
<errno.h>.

to

The contents are the same as the Standard C library header
<errno.h>, except that errno shall be defined as a macro.

Rationale:

C++ must not leave it up to the implementation to decide whether or
not a name is a macro; it must explicitly specify exactly which names
are required to be macros. The only one that really works is for it
to be a macro.

In the synopsis in 27.7.3.1 [ostream], remove the
// partial specializationss comment. Also remove the same
comment (correctly spelled, but still incorrect) from the synopsis in
27.7.3.6.4 [ostream.inserters.character].

inserts t if and only if there is no element in the container with key
equivalent to the key of t. The bool component of the returned pair
indicates whether the insertion takes place and the iterator component of the
pair points to the element with key equivalent to the key of t.

The description should be more specific about exactly how the bool component
indicates whether the insertion takes place.

Proposed resolution:

Change the text in question to

...The bool component of the returned pair is true if and only if the insertion
takes place...

317. Instantiation vs. specialization of facets

The localization section of the standard refers to specializations of
the facet templates as instantiations even though the required facets
are typically specialized rather than explicitly (or implicitly)
instantiated. In the case of ctype<char> and
ctype_byname<char> (and the wchar_t versions), these facets are
actually required to be specialized. The terminology should be
corrected to make it clear that the standard doesn't mandate explicit
instantiation (the term specialization encompasses both explicit
instantiations and specializations).

Proposed resolution:

In the following paragraphs, replace all occurrences of the word
instantiation or instantiations with specialization or specializations,
respectively:

An implementation is required to provide those instantiations
for facet templates identified as members of a category, and
for those shown in Table 52:

to

An implementation is required to provide those specializations...

[Nathan will review these changes, and will look for places where
explicit specialization is necessary.]

Rationale:

This is a simple matter of outdated language. The language to
describe templates was clarified during the standardization process,
but the wording in clause 22 was never updated to reflect that
change.

320. list::assign overspecified

Section 23.3.5.2 [list.cons], paragraphs 6-8 specify that list assign (both forms) have
the "effects" of a call to erase followed by a call to insert.

I would like to document that implementers have the freedom to implement
assign by other methods, as long as the end result is the same and the
exception guarantee is as good or better than the basic guarantee.

The motivation for this is to use T's assignment operator to recycle
existing nodes in the list instead of erasing them and reallocating
them with new values. It is also worth noting that, with careful
coding, most common cases of assign (everything but assignment with
true input iterators) can elevate the exception safety to strong if
T's assignment has a nothrow guarantee (with no extra memory cost).
Metrowerks does this. However I do not propose that this subtlety be
standardized. It is a QoI issue.

Effects: Replaces the contents of the list with the range [first, last).

In 23.2.3 [sequence.reqmts], in Table 67 (sequence requirements),
add two new rows:

a.assign(i,j)   void   pre: i, j are not iterators into a.
                       Replaces elements in a with a copy
                       of [i, j).
a.assign(n,t)   void   pre: t is not a reference into a.
                       Replaces elements in a with n copies
                       of t.

Change 23.3.5.2 [list.cons]/8 from:

Effects:

erase(begin(), end());
insert(begin(), n, t);

to:

Effects: Replaces the contents of the list with n copies of t.

[Redmond: Proposed resolution was changed slightly. Previous
version made explicit statement about exception safety, which wasn't
consistent with the way exception safety is expressed elsewhere.
Also, the change in the sequence requirements is new. Without that
change, the proposed resolution would have required that assignment of
a subrange would have to work. That too would have been
overspecification; it would effectively mandate that assignment use a
temporary. Howard provided wording.
]

[Curaçao: Made editorial improvement in wording; replaced
"Replaces elements in a with copies of elements in [i, j)."
with "Replaces the elements of a with a copy of [i, j)."
Changes not deemed serious enough to require rereview.]

Section 22.2.2.1.2 at p7 states that "A length specifier is added to
the conversion function, if needed, as indicated in Table 56."
However, Table 56 uses the term "length modifier", not "length
specifier".

Proposed resolution:

In 22.2.2.1.2 at p7, change the text "A length specifier is added ..."
to be "A length modifier is added ..."

It's widely assumed that, if X is a container,
iterator_traits<X::iterator>::value_type and
iterator_traits<X::const_iterator>::value_type should both be
X::value_type. However, this is nowhere stated. The language in
Table 65 is not precise about the iterators' value types (it predates
iterator_traits), and could even be interpreted as saying that
iterator_traits<X::const_iterator>::value_type should be "const
X::value_type".

This belongs as a container requirement, rather than an iterator
requirement, because the whole notion of iterator/const_iterator
pairs is specific to containers' iterator.

It is existing practice that (for example)
iterator_traits<list<int>::const_iterator>::value_type
is "int", rather than "const int". This is consistent with
the way that const pointers are handled: the standard already
requires that iterator_traits<const int*>::value_type is int.

In the following sections, a and b denote values of X, n denotes a
value of the difference type Distance, u, tmp, and m denote
identifiers, r denotes a value of X&, t denotes a value of
value type T.

Two other parts of the standard that are relevant to whether
output iterators have value types:

24.1/1 says "All iterators i support the expression *i,
resulting in a value of some class, enumeration, or built-in type
T, called the value type of the iterator".

24.3.1/1, which says "In the case of an output iterator, the types
iterator_traits<Iterator>::difference_type
iterator_traits<Iterator>::value_type are both defined as void."

The first of these passages suggests that "*i" is supposed to
return a useful value, which contradicts the note in 24.1.2/2 saying
that the only valid use of "*i" for output iterators is in an
expression of the form "*i = t". The second of these passages appears
to contradict Table 73, because it suggests that "*i"'s return value
should be void. The second passage is also broken in the case of an
iterator type, like non-const pointers, that satisfies both the output
iterator requirements and the forward iterator requirements.

What should the standard say about *i's return value when
i is an output iterator, and what should it say about that t is in the
expression "*i = t"? Finally, should the standard say anything about
output iterators' pointer and reference types?

Proposed resolution:

24.1 p1, change

All iterators i support the expression *i, resulting
in a value of some class, enumeration, or built-in type T,
called the value type of the iterator.

to

All input iterators i support the expression *i,
resulting in a value of some class, enumeration, or built-in type
T, called the value type of the iterator. All output
iterators support the expression *i = o where o is a
value of some type that is in the set of types that are writable to
the particular iterator type of i.

24.1 p9, add

o denotes a value of some type that is writable to the
output iterator.

Table 73, change

*a = t

to

*r = o

and change

*r++ = t

to

*r++ = o

[post-Redmond: Jeremy provided wording]

Rationale:

The LWG considered two options: change all of the language that
seems to imply that output iterators have value types, thus making it
clear that output iterators have no value types, or else define value
types for output iterators consistently. The LWG chose the former
option, because it seems clear that output iterators were never
intended to have value types. This was a deliberate design decision,
and any language suggesting otherwise is simply a mistake.

A future revision of the standard may wish to revisit this design
decision.

The Returns clause in 22.2.6.3.2, p3 says about
moneypunct<charT>::do_grouping()

Returns: A pattern defined identically as the result of
numpunct<charT>::do_grouping().241)

Footnote 241 then reads

This is most commonly the value "\003" (not "3").

The returns clause seems to imply that the two member functions must
return an identical value which in reality may or may not be true,
since the facets are usually implemented in terms of struct std::lconv
and return the values of its grouping and mon_grouping members, respectively.
The footnote also implies that the member function of the moneypunct
facet (rather than the overridden virtual functions in moneypunct_byname)
most commonly return "\003", which contradicts the C standard which
specifies the value of "" for the (most common) C locale.

Proposed resolution:

Replace the text in Returns clause in 22.2.6.3.2, p3 with the following:

Returns: A pattern defined identically as, but not necessarily
equal to, the result of numpunct<charT>::do_grouping().241)

and replace the text in Footnote 241 with the following:

To specify grouping by 3s the value is "\003", not "3".

Rationale:

The fundamental problem is that the description of the locale facet
virtuals serves two purposes: describing the behavior of the base
class, and describing the meaning of and constraints on the behavior
in arbitrary derived classes. The new wording makes that separation a
little bit clearer. The footnote (which is nonnormative) is not
supposed to say what the grouping is in the "C" locale or in any other
locale. It is just a reminder that the values are interpreted as small
integers, not ASCII characters.

The wchar_t versions of time_get and
time_get_byname are listed incorrectly in table 52,
required instantiations. In both cases the second template
parameter is given as OutputIterator. It should instead be
InputIterator, since these are input facets.

328. Bad sprintf format modifier in money_put<>::do_put()

The sprintf format string "%.01f" (that's the digit one) in the
description of the do_put() member functions of the money_put facet in
22.2.6.2.2, p1 is incorrect. First, the f format specifier is wrong
for values of type long double, and second, the precision of 01
doesn't seem to make sense. What was most likely intended was
"%.0Lf", that is, a precision of zero followed by the L length
modifier.

There is an apparent contradiction about which circumstances can cause
a reallocation of a vector in Section 23.3.6.3 [vector.capacity] and
section 23.3.6.5 [vector.modifiers].

23.3.6.3 [vector.capacity],p5 says:

Notes: Reallocation invalidates all the references, pointers, and iterators
referring to the elements in the sequence. It is guaranteed that no
reallocation takes place during insertions that happen after a call to
reserve() until the time when an insertion would make the size of the vector
greater than the size specified in the most recent call to reserve().

(capacity) Returns: The total number of elements the vector
can hold without requiring reallocation

...After reserve(), capacity() is greater or equal to the
argument of reserve if reallocation happens; and equal to the previous value
of capacity() otherwise...

This implies that vec.capacity() is still 23, and so the insert()
should not require a reallocation, as vec.size() is 0. This is backed
up by 23.3.6.5 [vector.modifiers], p1:

(insert) Notes: Causes reallocation if the new size is greater than the old
capacity.

Though this doesn't rule out reallocation if the new size is less
than the old capacity, I think the intent is clear.

Proposed resolution:

Change the wording of 23.3.6.3 [vector.capacity] paragraph 5 to:

Notes: Reallocation invalidates all the references, pointers, and
iterators referring to the elements in the sequence. It is guaranteed
that no reallocation takes place during insertions that happen after a
call to reserve() until the time when an insertion would make the size
of the vector greater than the value of capacity().

[Redmond: original proposed resolution was modified slightly. In
the original, the guarantee was that there would be no reallocation
until the size would be greater than the value of capacity() after the
most recent call to reserve(). The LWG did not believe that the
"after the most recent call to reserve()" added any useful
information.]

Rationale:

There was general agreement that, when reserve() is called twice in
succession and the argument to the second invocation is smaller than
the argument to the first, the intent was for the second invocation to
have no effect. Wording implying that such cases have an effect on
reallocation guarantees was inadvertent.

331. bad declaration of destructor for ios_base::failure

With the change in 17.6.5.12 [res.on.exception.handling] to state
"An implementation may strengthen the exception-specification for a
non-virtual function by removing listed exceptions."
(issue 119)
and the following declaration of ~failure() in ios_base::failure

[Footnote: The effect of executing cout << endl is to insert a
newline character in the output sequence controlled by cout, then
synchronize it with any external file with which it might be
associated. --- end foonote]

Does the term "file" here refer to the external device?
This leads to some implementation ambiguity on systems with fully
buffered files where a newline does not cause a flush to the device.

Choosing to sync with the device leads to significant performance
penalties for each call to endl, while not sync-ing leads to
errors under special circumstances.

I could not find any other statement that explicitly defined
the behavior one way or the other.

Proposed resolution:

Remove footnote 300 from section 27.7.3.8 [ostream.manip].

Rationale:

We already have normative text saying what endl does: it
inserts a newline character and calls flush. This footnote
is at best redundant, at worst (as this issue says) misleading,
because it appears to make promises about what flush
does.

334. map::operator[] specification forces inefficient implementation

The current standard describes map::operator[] using a
code example. That code example is however quite
inefficient because it requires several useless copies
of both the passed key_type value and of default
constructed mapped_type instances.
In my opinion the committee did not mean to
require all those temporary copies.

Currently map::operator[] behaviour is specified as:

Returns:
(*((insert(make_pair(x, T()))).first)).second.

This specification, however, uses make_pair, a
template function whose parameters will in this
case be deduced as const key_type& and
const T&. This will create a pair<key_type,T> that
isn't the correct type expected by map::insert so
another copy will be required using the template
conversion constructor available in pair to build
the required pair<const key_type,T> instance.

If we consider calling of key_type copy constructor
and mapped_type default constructor and copy
constructor as observable behaviour (as I think we
should) then the standard is in this place requiring
two copies of a key_type element plus a default
construction and two copy construction of a mapped_type
(supposing the addressed element is already present
in the map; otherwise at least another copy
construction for each type).

A simple (half) solution would be replacing the description with:

Returns:
(*((insert(value_type(x, T()))).first)).second.

This will remove the wrong typed pair construction that
requires one extra copy of both key and value.

However still the using of map::insert requires temporary
objects while the operation, from a logical point of view,
doesn't require any.

I think that a better solution would be leaving an
implementer free to use a different approach than map::insert
that, because of its interface, forces default constructed
temporaries and copies in this case.
The best solution in my opinion would be just requiring
map::operator[] to return a reference to the mapped_type
part of the contained element creating a default element
with the specified key if no such element is already
present in the container. Also a logarithmic complexity
requirement should be specified for the operation.

This would allow library implementers to write alternative
implementations not using map::insert and reaching optimal
performance in both cases of the addressed element being
present or absent from the map (no temporaries at all and
just the creation of a new pair inside the container if
the element isn't present).
Some implementer has already taken this option but I think
that the current wording of the standard rules that as
non-conforming.

Proposed resolution:

Replace 23.4.4.3 [map.access] paragraph 1 with

-1- Effects: If there is no key equivalent to x in the map, inserts
value_type(x, T()) into the map.

-2- Returns: A reference to the mapped_type corresponding to x in *this.

-3- Complexity: logarithmic.

[This is the second option mentioned above. Howard provided
wording. We may also wish to have a blanket statement somewhere in
clause 17 saying that we do not intend the semantics of sample code
fragments to be interpreted as specifying exactly how many copies are
made. See issue 98 for a similar problem.]

Rationale:

This is the second solution described above; as noted, it is
consistent with existing practice.

Note that we now need to specify the complexity explicitly, because
we are no longer defining operator[] in terms of
insert.

In section 25.3.5 [alg.replace] before p4: The name of the first
parameter of template replace_copy_if should be "InputIterator"
instead of "Iterator". According to 17.5.2.1 [type.descriptions] p1 the
parameter name conveys real normative meaning.

Proposed resolution:

Change Iterator to InputIterator.

338. is whitespace allowed between `-' and a digit?

From Stage 2 processing in 22.4.2.1.2 [facet.num.get.virtuals], p8 and 9 (the
original text or the text corrected by the proposed resolution of
issue 221) it seems clear that no whitespace is allowed
within a number, but 22.4.3.1 [locale.numpunct], p2, which gives the
format for integer and floating point values, says that whitespace is
optional between a plusminus and a digit.

The text needs to be clarified to either consistently allow or
disallow whitespace between a plusminus and a digit. It might be
worthwhile to consider the fact that the C library stdio facility does
not permit whitespace embedded in numbers and neither does the C or
C++ core language (the syntax of integer-literals is given in 2.14.2 [lex.icon], that of floating-point-literals in 2.14.4 [lex.fcon] of the C++ standard).

Proposed resolution:

Change the first part of 22.4.3.1 [locale.numpunct] paragraph 2 from:

The syntax for number formats is as follows, where digit
represents the radix set specified by the fmtflags argument
value, whitespace is as determined by the facet
ctype<charT> (22.2.1.1), and thousands-sep and
decimal-point are the results of corresponding
numpunct<charT> members. Integer values have the
format:

to:

The syntax for number formats is as follows, where digit
represents the radix set specified by the fmtflags argument
value, and thousands-sep and decimal-point are the
results of corresponding numpunct<charT> members.
Integer values have the format:

Rationale:

It's not clear whether the format described in 22.4.3.1 [locale.numpunct] paragraph 2 has any normative weight: nothing in the
standard says how, or whether, it's used. However, there's no reason
for it to differ gratuitously from the very specific description of
numeric processing in 22.4.2.1.2 [facet.num.get.virtuals]. The proposed
resolution removes all mention of "whitespace" from that format.

The ctype_category::mask type is declared to be an enum in 22.4.1 [category.ctype] with p1 then stating that it is a bitmask type, most
likely referring to the definition of bitmask type in 17.5.2.1.3 [bitmask.types], p1. However, the said definition only applies to
clause 27, making the reference in 22.2.1 somewhat dubious.

Proposed resolution:

Clarify 17.3.2.1.2, p1 by changing the current text from

Several types defined in clause 27 are bitmask types. Each bitmask type
can be implemented as an enumerated type that overloads certain operators,
as an integer type, or as a bitset (20.5 [template.bitset]).

to read

Several types defined in clauses lib.language.support through
lib.input.output and Annex D are bitmask types. Each bitmask type can
be implemented as an enumerated type that overloads certain operators,
as an integer type, or as a bitset (lib.template.bitset).

Additionally, change the definition in 22.2.1 to adopt the same
convention as in clause 27 by replacing the existing text with the
following (note, in particular, the cross-reference to 17.3.2.1.2 in
22.2.1, p1):

340. interpretation of has_facet<Facet>(loc)

It's unclear whether 22.1.1.1.1, p3 says that
has_facet<Facet>(loc) returns true for any Facet
from Table 51 or whether it includes Table 52 as well:

For any locale loc either constructed, or returned by
locale::classic(), and any facet Facet that is a member of a
standard category, has_facet<Facet>(loc) is true. Each
locale member function which takes a locale::category
argument operates on the corresponding set of facets.

It seems that it comes down to which facets are considered to be members of a
standard category. Intuitively, I would classify all the facets in Table 52 as
members of their respective standard categories, but there are an unbounded set
of them...

The paragraph implies that, for instance, has_facet<num_put<C,
OutputIterator> >(loc) must always return true. I don't think that's
possible. If it were, then use_facet<num_put<C, OutputIterator>
>(loc) would have to return a reference to a distinct object for each
valid specialization of num_put<C, OutputIteratory>, which is
clearly impossible.

On the other hand, if none of the facets in Table 52 is a member of a standard
category then none of the locale member functions that operate on entire
categories of facets will work properly.

It seems that what p3 should mention that it's required (permitted?)
to hold only for specializations of Facet from Table 52 on
C from the set { char, wchar_t }, and
InputIterator and OutputIterator from the set of
{
{i,o}streambuf_iterator<{char,wchar_t}>
}.

Proposed resolution:

In 22.3.1.1.1 [locale.category], paragraph 3, change
"that is a member of a standard category" to "shown in Table 51".

Rationale:

The facets in Table 52 are an unbounded set. Locales should not be
required to contain an infinite number of facets.

It's not necessary to talk about which values of InputIterator and
OutputIterator must be supported. Table 51 already contains a
complete list of the ones we need.

However, the wording of 23.3.6.3 [vector.capacity] paragraph 5 prevents
the capacity of a vector from being reduced, following a call to
reserve(). This invalidates the idiom, as swap() is thus prevented
from reducing the capacity. The proposed wording for issue 329 does
not affect this. Consequently, the example above requires the
temporary to be expanded to cater for the contents of vec, and the
contents to be copied across. This is a linear-time operation.

However, the container requirements state that swap must have constant
complexity (23.2 [container.requirements] note to table 65).

This is an important issue, as reallocation affects the validity of
references and iterators.

If the wording of 23.2.4.2p5 is taken to be the desired intent, then
references and iterators remain valid after a call to swap, if they refer to
an element before the new end() of the vector into which they originally
pointed, in which case they refer to the element at the same index position.
Iterators and references that referred to an element whose index position
was beyond the new end of the vector are invalidated.

If the note to table 65 is taken as the desired intent, then there are two
possibilities with regard to iterators and references:

All Iterators and references into both vectors are invalidated.

Iterators and references into either vector remain valid, and remain
pointing to the same element. Consequently iterators and references that
referred to one vector now refer to the other, and vice-versa.

Proposed resolution:

Add a new paragraph after 23.3.6.3 [vector.capacity] paragraph 5:

void swap(vector<T,Allocator>& x);

Effects: Exchanges the contents and capacity() of *this
with that of x.

Complexity: Constant time.

[This solves the problem reported for this issue. We may also
have a problem with a circular definition of swap() for other
containers.]

Rationale:

swap should be constant time. The clear intent is that it should just
do pointer twiddling, and that it should exchange all properties of
the two vectors, including their reallocation guarantees.

343. Unspecified library header dependencies

The synopses of the C++ library headers clearly show which names are
required to be defined in each header. Since in order to implement the
classes and templates defined in these headers declarations of other
templates (but not necessarily their definitions) are typically
necessary the standard in 17.4.4, p1 permits library implementers to
include any headers needed to implement the definitions in each header.

For instance, although it is not explicitly specified in the synopsis of
<string>, at the point of definition of the std::basic_string template
the declaration of the std::allocator template must be in scope. All
current implementations simply include <memory> from within <string>,
either directly or indirectly, to bring the declaration of
std::allocator into scope.

Additionally, however, some implementation also include <istream> and
<ostream> at the top of <string> to bring the declarations of
std::basic_istream and std::basic_ostream into scope (which are needed
in order to implement the string inserter and extractor operators
(21.3.7.9 [lib.string.io])). Other implementations only include
<iosfwd>, since strictly speaking, only the declarations and not the
full definitions are necessary.

Obviously, it is possible to implement <string> without actually
providing the full definitions of all the templates std::basic_string
uses (std::allocator, std::basic_istream, and std::basic_ostream).
Furthermore, not only is it possible, doing so is likely to have a
positive effect on compile-time efficiency.

But while it may seem perfectly reasonable to expect a program that uses
the std::basic_string insertion and extraction operators to also
explicitly include <istream> or <ostream>, respectively, it doesn't seem
reasonable to also expect it to explicitly include <memory>. Since
what's reasonable and what isn't is highly subjective one would expect
the standard to specify what can and what cannot be assumed.
Unfortunately, that isn't the case.

There are many more examples that demonstrate this lack of a
requirement. I believe that in a good number of cases it would be
unreasonable to require that a program explicitly include all the
headers necessary for a particular template to be specialized, but I
think that there are cases such as some of those above where it would
be desirable to allow implementations to include only as much as
necessary and not more.

[
post Bellevue:
]

Position taken in prior reviews is that the idea of a table of header
dependencies is a good one. Our view is that a full paper is needed to
do justice to this, and we've made that recommendation to the issue
author.

For every C++ library header, supply a minimum set of other C++ library
headers that are required to be included by that header. The proposed
list is below (C++ headers for C Library Facilities, table 12 in
17.4.1.2, p3, are omitted):

The portability problem is real. A program that works correctly on
one implementation might fail on another, because of different header
dependencies. This problem was understood before the standard was
completed, and it was a conscious design choice.

One possible way to deal with this, as a library extension, would
be an <all> header.

Hinnant: It's time we dealt with this issue for C++0X. Reopened.

345. type tm in <cwchar>

C99, and presumably amendment 1 to C90, specify that <wchar.h>
declares struct tm as an incomplete type. However, table 48 in 21.7 [c.strings] does not mention the type tm as being declared in
<cwchar>. Is this omission intentional or accidental?

Proposed resolution:

In section 21.7 [c.strings], add "tm" to table 48.

346. Some iterator member functions should be const

Iterator member functions and operators that do not change the state
of the iterator should be defined as const member functions or as
functions that take iterators either by const reference or by
value. The standard does not explicitly state which functions should
be const. Since this is a fairly common mistake, the following changes
are suggested to make this explicit.

The tables almost indicate constness properly through naming: r
for non-const and a,b for const iterators. The following changes
make this more explicit and also fix a couple of problems.

Proposed resolution:

In X [iterator.concepts] Change the first section of p9 from
"In the following sections, a and b denote values of X..." to
"In the following sections, a and b denote values of type const X...".

In Table 73, change

a->m U& ...

to

a->m const U& ...
r->m U& ...

In Table 73 expression column, change

*a = t

to

*r = t

[Redmond: The container requirements should be reviewed to see if
the same problem appears there.]

In 22.3.1.1.1 [locale.category] paragraph 1, the category members
are described as bitmask elements. In fact, the bitmask requirements
in 17.5.2.1.3 [bitmask.types] don't seem quite right: none
and all are bitmask constants, not bitmask elements.

In particular, the requirements for none interact poorly
with the requirement that the LC_* constants from the C library must
be recognizable as C++ locale category constants. LC_* values should
not be mixed with these values to make category values.

We have two options for the proposed resolution. Informally:
option 1 removes the requirement that LC_* values be recognized as
category arguments. Option 2 changes the category type so that this
requirement is implementable, by allowing none to be some
value such as 0x1000 instead of 0.

Valid category values include the locale member bitmask
elements collate, ctype, monetary,
numeric, time, and messages, each of which
represents a single locale category. In addition, locale member
bitmask constant none is defined as zero and represents no
category. And locale member bitmask constant all is defined such that
the expression

(collate | ctype | monetary | numeric | time | messages | all) == all

is true, and represents the union of all categories. Further
the expression (X | Y), where X and Y each
represent a single category, represents the union of the two
categories.

locale member functions expecting a category
argument require one of the category values defined above, or
the union of two or more such values. Such a category
argument identifies a set of locale categories. Each locale category,
in turn, identifies a set of locale facets, including at least those
shown in Table 51:

[Curaçao: need input from locale experts.]

Rationale:

The LWG considered, and rejected, an alternate proposal (described
as "Option 2" in the discussion). The main reason for rejecting it
was that library implementors were concerned about implementation
difficulty, given that getting a C++ library to work smoothly with a
separately written C library is already a delicate business. Some
library implementers were also concerned about the issue of adding
extra locale categories.

Option 2:
Replace the first paragraph of 22.3.1.1 [locale.types] with:

Valid category values include the enumerated values. In addition, the
result of applying commutative operators | and & to any two valid
values is valid, and results in the setwise union and intersection,
respectively, of the argument categories. The values all and
none are defined such that for any valid value cat, the
expressions (cat | all == all), (cat & all == cat),
(cat | none == cat) and (cat & none == none) are
true. For non-equal values cat1 and cat2 of the
remaining enumerated values, (cat1 & cat2 == none) is true.
For any valid categories cat1 and cat2, the result
of (cat1 & ~cat2) is valid, and equals the setwise union of
those categories found in cat1 but not found in cat2.
[Footnote: it is not required that all equal the setwise union
of the other enumerated values; implementations may add extra categories.]

(1)
There are no requirements on the stateT template parameter of
fpos listed in 27.4.3. The interface appears to require that
the type be at least Assignable and CopyConstructible (27.4.3.1, p1),
and I think also DefaultConstructible (to implement the operations in
Table 88).

21.1.2, p3, however, only requires that
char_traits<charT>::state_type meet the requirements of
CopyConstructible types.

(2)
Additionally, the stateT template argument has no
corresponding typedef in fpos which might make it difficult to use in
generic code.

The LWG feels this is two issues, as indicated above. The first is
a defect---std::basic_fstream is unimplementable without these
additional requirements---and the proposed resolution fixes it. The
second is questionable; who would use that typedef? The class
template fpos is used only in a very few places, all of which know the
state type already. Unless motivation is provided, the second should
be considered NAD.

353. std::pair missing template assignment

The class template std::pair defines a template ctor (20.2.2, p4) but
no template assignment operator. This may lead to inefficient code since
assigning an object of pair<C, D> to pair<A, B>
where the types C and D are distinct from but convertible to
A and B, respectively, results in a call to the template copy
ctor to construct an unnamed temporary of type pair<A, B>
followed by an ordinary (perhaps implicitly defined) assignment operator,
instead of just a straight assignment.

Proposed resolution:

Add the following declaration to the definition of std::pair:

template<class U, class V>
pair& operator=(const pair<U, V> &p);

And also add a paragraph describing the effects of the function template to the
end of 20.2.2:

template<class U, class V>
pair& operator=(const pair<U, V> &p);

Effects: first = p.first; second = p.second;
Returns: *this

[Curaçao: There is no indication this was anything other than
a design decision, and thus NAD. May be appropriate for a future
standard.]

[
Pre Bellevue: It was recognized that this was taken care of by
N1856,
and thus moved from NAD Future to Resolved.
]

Discussions in the thread "Associative container lower/upper bound
requirements" on comp.std.c++ suggests that there is a defect in the
C++ standard, Table 69 of section 23.1.2, "Associative containers",
[lib.associative.reqmts]. It currently says:

a.find(k): returns an iterator pointing to an element with the key equivalent to
k, or a.end() if such an element is not found.

a.lower_bound(k): returns an iterator pointing to the first element with
key not less than k.

a.upper_bound(k): returns an iterator pointing to the first element with
key greater than k.

We have "or a.end() if such an element is not found" for
find, but not for upper_bound or
lower_bound. As the text stands, one would be forced to
insert a new element into the container and return an iterator to that
in case the sought iterator does not exist, which does not seem to be
the intention (and not possible with the "const" versions).

Table 68 "Optional Sequence Operations" in 23.1.1/12
specifies operational semantics for "a.back()" as
"*--a.end()", which may be ill-formed [because calling
operator-- on a temporary (the return) of a built-in type is
ill-formed], provided a.end() returns a simple pointer rvalue
(this is almost always the case for std::vector::end(), for
example). Thus, the specification is not only incorrect, it
demonstrates a dangerous construct: "--a.end()" may
successfully compile and run as intended, but after changing the type
of the container or the mode of compilation it may produce a
compile-time error.

Proposed resolution:

Change the specification in table 68 "Optional Sequence
Operations" in 23.1.1/12 for "a.back()" from

[There is a second possible defect; table 68 "Optional
Sequence Operations" in the "Operational Semantics"
column uses operations present only in the "Reversible
Container" requirements, yet there is no stated dependency
between these separate requirements tables. Ask in Santa Cruz if the
LWG would like a new issue opened.]

[Santa Cruz: the proposed resolution is even worse than what's in
the current standard: erase is undefined for reverse iterator. If
we're going to make the change, we need to define a temporary and
use operator--. Additionally, we don't know how prevalent this is:
do we need to make this change in more than one place? Martin has
volunteered to review the standard and see if this problem occurs
elsewhere.]

[Oxford: Matt provided new wording to address the concerns raised
in Santa Cruz. It does not appear that this problem appears
anywhere else in clauses 23 or 24.]

[Kona: In definition of operational semantics of back(), change
"*tmp" to "return *tmp;"]

I don't think thousands_sep is being treated correctly after
decimal_point has been seen. Since grouping applies only to the
integral part of the number, the first such occurrence should, IMO,
terminate Stage 2. (If it does not terminate it, then 22.2.2.1.2, p12
and 22.2.3.1.2, p3 need to explain how thousands_sep is to be
interpreted in the fractional part of a number.)

The easiest change I can think of that resolves this issue would be
something like below.

Proposed resolution:

Change the first sentence of 22.2.2.1.2, p9 from

If discard is true then the position of the character is
remembered, but the character is otherwise ignored. If it is not
discarded, then a check is made to determine if c is allowed as
the next character of an input field of the conversion specifier
returned by stage 1. If so it is accumulated.

to

If discard is true, then if '.' has not yet been
accumulated, then the position of the character is remembered, but
the character is otherwise ignored. Otherwise, if '.' has
already been accumulated, the character is discarded and Stage 2
terminates. ...

Rationale:

We believe this reflects the intent of the Standard. Thousands sep
characters after the decimal point are not useful in any locale.
Some formatting conventions do group digits that follow the decimal
point, but they usually introduce a different grouping character
instead of reusing the thousand sep character. If we want to add
support for such conventions, we need to do so explicitly.

360. locale mandates inefficient implementation

22.1.1, p7 (copied below) allows iostream formatters and extractors
to make assumptions about the values returned from facet members.
However, such assumptions are apparently not guaranteed to hold
in other cases (e.g., when the facet members are being called directly
rather than as a result of iostream calls, or between successive
calls to the same iostream functions with no intervening calls to
imbue(), or even when the facet member functions are called
from other member functions of other facets). This restriction
prevents locale from being implemented efficiently.

Proposed resolution:

Change the first sentence in 22.1.1, p7 from

In successive calls to a locale facet member function during
a call to an iostream inserter or extractor or a streambuf member
function, the returned result shall be identical. [Note: This
implies that such results may safely be reused without calling
the locale facet member function again, and that member functions
of iostream classes cannot safely call imbue()
themselves, except as specified elsewhere. --end note]

to

In successive calls to a locale facet member function on a facet
object installed in the same locale, the returned result shall be
identical. ...

Rationale:

This change is reasonable because it clarifies the intent of this
part of the standard.

362. bind1st/bind2nd type safety

The definition of bind1st() (D.11 [depr.lib.binders]) can result in
the construction of an unsafe binding between incompatible pointer
types. For example, given a function whose first parameter type is
'pointer to T', it's possible without error to bind an argument of
type 'pointer to U' when U does not derive from T:

The definition of bind1st() includes a functional-style conversion to
map its argument to the expected argument type of the bound function
(see below):

typename Operation::first_argument_type(x)

A functional-style conversion (D.11 [depr.lib.binders]) is defined to be
semantically equivalent to an explicit cast expression (D.11 [depr.lib.binders]), which may (according to 5.4, paragraph 5) be interpreted
as a reinterpret_cast, thus masking the error.

The problem and proposed change also apply to D.11 [depr.lib.binders].

Proposed resolution:

Add this sentence to the end of D.11 [depr.lib.binders]/1:
"Binders bind1st and bind2nd are deprecated in
favor of std::tr1::bind."

(Notes to editor: (1) when and if tr1::bind is incorporated into
the standard, "std::tr1::bind" should be changed to "std::bind". (2)
20.5.6 should probably be moved to Annex D.)

Rationale:

There is no point in fixing bind1st and bind2nd. tr1::bind is a
superior solution. It solves this problem and others.

365. Lack of const-qualification in clause 27

Some stream and streambuf member functions are declared non-const,
even thought they appear only to report information rather than to
change an object's logical state. They should be declared const. See
document N1360 for details and rationale.

The list of member functions under discussion: in_avail,
showmanyc, tellg, tellp, is_open.

Of the changes proposed in N1360, the only one that is safe is
changing the filestreams' is_open to const. The LWG believed that
this was NAD the first time it considered this issue (issue 73), but now thinks otherwise. The corresponding streambuf
member function, after all, is already const.

The other proposed changes are less safe, because some streambuf
functions that appear merely to report a value do actually perform
mutating operations. It's not even clear that they should be
considered "logically const", because streambuf has two interfaces, a
public one and a protected one. These functions may, and often do,
change the state as exposed by the protected interface, even if the
state exposed by the public interface is unchanged.

Note that implementers can make this change in a binary compatible
way by providing both overloads; this would be a conforming extension.

369. io stream objects and static ctors

Is it safe to use standard iostream objects from constructors of
static objects? Are standard iostream objects constructed and are
their associations established at that time?

Surprisingly enough, the Standard does NOT require that.

27.3/2 [lib.iostream.objects] guarantees that standard iostream
objects are constructed and their associations are established before
the body of main() begins execution. It also refers to ios_base::Init
class as the panacea for constructors of static objects.

However, there's nothing in 27.3 [lib.iostream.objects],
in 27.4.2 [lib.ios.base], and in 27.4.2.1.6 [lib.ios::Init],
that would require implementations to allow access to standard
iostream objects from constructors of static objects.

Details:

Core text refers to some magic object ios_base::Init, which will
be discussed below:

"The [standard iostream] objects are constructed, and their
associations are established at some time prior to or during
first time an object of class basic_ios<charT,traits>::Init
is constructed, and in any case before the body of main
begins execution." (27.3/2 [lib.iostream.objects])

However, the second non-normative footnote makes an explicit
and unsupported claim:

"Constructors and destructors for static objects can access these
[standard iostream] objects to read input from stdin or write output
to stdout or stderr." (27.3/2 footnote 265 [lib.iostream.objects])

The only bit of magic is related to that ios_base::Init class. AFAIK,
the rationale behind ios_base::Init was to bring an instance of this
class to each translation unit which #included <iostream> or
related header. Such an inclusion would support the claim of footnote
quoted above, because in order to use some standard iostream object it
is necessary to #include <iostream>.

However, while the Standard explicitly describes ios_base::Init as
an appropriate class for doing the trick, I failed to find a
mention of an _instance_ of ios_base::Init in the Standard.

Proposed resolution:

Add to 27.4 [iostream.objects], p2, immediately before the last sentence
of the paragraph, the following two sentences:

If a translation unit includes <iostream>, or explicitly
constructs an ios_base::Init object, these stream objects shall
be constructed before dynamic initialization of non-local
objects defined later in that translation unit, and these stream
objects shall be destroyed after the destruction of dynamically
initialized non-local objects defined later in that translation unit.

[Lillehammer: Matt provided wording.]

[Mont Tremblant: Matt provided revised wording.]

Rationale:

The original proposed resolution unconditionally required
implementations to define an ios_base::Init object of some
implementation-defined name in the header <iostream>. That's an
overspecification. First, defining the object may be unnecessary
and even detrimental to performance if an implementation can
guarantee that the 8 standard iostream objects will be initialized
before any other user-defined object in a program. Second, there
is no need to require implementations to document the name of the
object.

The new proposed resolution gives users guidance on what they need to
do to ensure that stream objects are constructed during startup.

The requirements for multiset and multimap containers (23.1
[lib.containers.requirements], 23.1.2 [lib.associative.reqmnts],
23.3.2 [lib.multimap] and 23.3.4 [lib.multiset]) make no mention of
the stability of the required (mutating) member functions. It appears
the standard allows these functions to reorder equivalent elements of
the container at will, yet the pervasive red-black tree implementation
appears to provide stable behaviour.

This is of most concern when considering the behaviour of erase().
A stability requirement would guarantee the correct working of the
following 'idiom' that removes elements based on a certain predicate
function.

Although clause 23.1.2/8 guarantees that i remains a valid iterator
throughout this loop, absence of the stability requirement could
potentially result in elements being skipped. This would make
this code incorrect, and, furthermore, means that there is no way
of erasing these elements without iterating first over the entire
container, and second over the elements to be erased. This would
be unfortunate, and have a negative impact on both performance and
code simplicity.

If the stability requirement is intended, it should be made explicit
(probably through an extra paragraph in clause 23.1.2).

If it turns out stability cannot be guaranteed, I'd argue that a
remark or footnote is called for (also somewhere in clause 23.1.2) to
warn against relying on stable behaviour (as demonstrated by the code
above). If most implementations will display stable behaviour, any
problems emerging on an implementation without stable behaviour will
be hard to track down by users. This would also make the need for an
erase_if() member function that much greater.

Add the following to the end of 23.2.4 [associative.reqmts] paragraph 4:
"For multiset and multimap, insert and erase
are stable: they preserve the relative ordering of equivalent
elements."

[Lillehammer: Matt provided wording]

[Joe Gottman points out that the provided wording does not address
multimap and multiset. N1780 also addresses this issue and suggests
wording.]

[Mont Tremblant: Changed set and map to multiset and multimap.]

Rationale:

The LWG agrees that this guarantee is necessary for common user
idioms to work, and that all existing implementations provide this
property. Note that this resolution guarantees stability for
multimap and multiset, not for all associative containers in
general.

373. Are basic_istream and basic_ostream to use (exceptions()&badbit) != 0 ?

In 27.7.2.2.1 [istream.formatted.reqmts] and 27.7.3.6.1 [ostream.formatted.reqmts]
(exception()&badbit) != 0 is used in testing for rethrow, yet
exception() is the constructor to class std::exception in 18.7.1 [type.info] that has no return type. Should member function
exceptions() found in 27.5.5 [ios] be used instead?

In Section 27.8.2.4 [stringbuf.virtuals], Table 90, the implication is that
the four conditions should be mutually exclusive, but they are not.
The first two cases, as written, are subcases of the third.

As written, it is unclear what should be the result if cases 1 and 2
are both true, but case 3 is false.

Proposed resolution:

Rewrite these conditions as:

(which & (ios_base::in|ios_base::out)) == ios_base::in

(which & (ios_base::in|ios_base::out)) == ios_base::out

(which & (ios_base::in|ios_base::out)) ==
(ios_base::in|ios_base::out)
and way == either ios_base::beg or ios_base::end

Otherwise

Rationale:

It's clear what we wanted to say, we just failed to say it. This
fixes it.

The last sentence in 22.2.1.1.2, p11 below doesn't seem to make sense.

charT do_widen (char c) const;
-11- Effects: Applies the simplest reasonable transformation from
a char value or sequence of char values to the corresponding
charT value or values. The only characters for which unique
transformations are required are those in the basic source
character set (2.2). For any named ctype category with a
ctype<charT> facet ctw and valid ctype_base::mask value
M (is(M, c) || !ctw.is(M, do_widen(c))) is true.

Shouldn't the last sentence instead read

For any named ctype category with a ctype<char> facet ctc
and valid ctype_base::mask value M
(ctc.is(M, c) || !is(M, do_widen(c))) is true.

I.e., if the narrow character c is not a member of a class of
characters then neither is the widened form of c. (To paraphrase
footnote 224.)

Proposed resolution:

Replace the last sentence of 22.4.1.1.2 [locale.ctype.virtuals], p11 with the
following text:

For any named ctype category with a ctype<char> facet ctc
and valid ctype_base::mask value M,
(ctc.is(M, c) || !is(M, do_widen(c))) is true.

[Kona: Minor edit. Added a comma after the M for clarity.]

Rationale:

The LWG believes this is just a typo, and that this is the correct fix.

Tables 53 and 54 in 22.4.1.5 [locale.codecvt.byname] are both titled "convert
result values," when surely "do_in/do_out result values" must have
been intended for Table 53 and "do_unshift result values" for Table
54.

Table 54, row 3 says that the meaning of partial is "more characters
needed to be supplied to complete termination." The function is not
supplied any characters, it is given a buffer which it fills with
characters or, more precisely, destination elements (i.e., an escape
sequence). So partial means that space for more than (to_limit - to)
destination elements was needed to terminate a sequence given the
value of state.

Proposed resolution:

Change the title of Table 53 to "do_in/do_out result values" and
the title of Table 54 to "do_unshift result values."

Change the text in Table 54, row 3 (the partial row), under the
heading Meaning, to "space for more than (to_limit - to) destination
elements was needed to terminate a sequence given the value of state."

All but one codecvt member functions that take a state_type argument
list as one of their preconditions that the state_type argument have
a valid value. However, according to 22.2.1.5.2, p6,
codecvt::do_unshift() is the only codecvt member that is supposed to
return error if the state_type object is invalid.

It seems to me that the treatment of state_type by all codecvt member
functions should be the same and the current requirements should be
changed. Since the detection of invalid state_type values may be
difficult in general or computationally expensive in some specific
cases, I propose the following:

Proposed resolution:

Add a new paragraph before 22.2.1.5.2, p5, and after the function
declaration below

Requires: (to <= to_end) well defined and true; state initialized,
if at the beginning of a sequence, or else equal to the result of
converting the preceding characters in the sequence.

and change the text in Table 54, row 4, the error row, under
the heading Meaning, from

state has invalid value

to

an unspecified error has occurred

Rationale:

The intent is that implementations should not be required to detect
invalid state values; such a requirement appears nowhere else. An
invalid state value is a precondition violation, i.e. undefined
behavior. Implementations that do choose to detect invalid state
values, or that choose to detect any other kind of error, may return
error as an indication.

Following a discussion on the boost list regarding end iterators and
the possibility of performing operator--() on them, it seems to me
that there is a typo in the standard. This typo has nothing to do
with that discussion.

I have checked this newsgroup, as well as attempted a search of the
Active/Defect/Closed Issues List on the site for the words "s is
derefer" so I believe this has not been proposed before. Furthermore,
the "Lists by Index" mentions only DR 299 on section
24.1.4, and DR 299 is not related to this issue.

The standard makes the following assertion on bidirectional iterators,
in section 24.1.4 [lib.bidirectional.iterators], Table 75:

In particular, "s is dereferenceable" seems to be in error. It seems
that the intention was to say "r is dereferenceable".

If it were to say "r is dereferenceable" it would
make perfect sense. Since s must be dereferenceable prior to
operator++, then the natural result of operator-- (to undo operator++)
would be to make r dereferenceable. Furthermore, without other
assertions, and basing only on precondition and postconditions, we
could not otherwise know this. So it is also interesting information.
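
The corrected wording can be illustrated with a minimal sketch (using std::list as an example bidirectional container): if s was dereferenceable before ++, then after r = s; --r, r is dereferenceable and designates the element s did before the increment.

```cpp
#include <list>

// Sketch of the corrected postcondition: --r "undoes" ++s, so r ends up
// dereferenceable, designating the element s designated before ++.
int first_via_decrement() {
    std::list<int> lst = {1, 2, 3};
    std::list<int>::iterator s = lst.begin();
    ++s;                                // s was dereferenceable before ++
    std::list<int>::iterator r = s;
    --r;                                // r must now be dereferenceable
    return *r;                          // the first element
}
```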

384. equal_range has unimplementable runtime complexity

Section 25.4.3.3 [equal.range]
states that at most 2 * log(last - first) + 1
comparisons are allowed for equal_range.

It is not possible to implement equal_range with these constraints.

In a range of one element as in:

int x = 1;
equal_range(&x, &x + 1, 1)

it is easy to see that at least 2 comparison operations are needed.

For this case at most 2 * log(1) + 1 = 1 comparison is allowed.

I have checked a few libraries and they all use the same (nonconforming)
algorithm for equal_range that has a complexity of

2 * log(distance(first, last)) + 2.

I guess this is the algorithm that the standard assumes for equal_range.

It is easy to see that 2 * log(distance) + 2 comparisons are enough
since equal range can be implemented with lower_bound and upper_bound
(both log(distance) + 1).

I think it is better to require something like 2 * log(distance) + O(1) (or
even simply logarithmic, as for multiset::equal_range).
An implementation then has more room to optimize for certain cases (e.g.
log(distance) behavior when at most one match is found in the range,
but 2 * log(distance) + 4 in the worst case).
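
The widely used implementation strategy described above can be sketched as follows (a simplification, not any particular vendor's code): each of the two underlying searches costs at most log2(n) + 1 comparisons, for a total of 2 * log2(n) + 2, exceeding the 2 * log(n) + 1 bound the standard text allowed.

```cpp
#include <algorithm>
#include <utility>

// equal_range in terms of lower_bound and upper_bound: at most
// 2 * log2(last - first) + 2 comparisons in total.
template <class FwdIt, class T>
std::pair<FwdIt, FwdIt> equal_range_sketch(FwdIt first, FwdIt last,
                                           const T& value) {
    return std::make_pair(std::lower_bound(first, last, value),
                          std::upper_bound(first, last, value));
}
```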

The LWG considered just saying O(log n) for all three, but
decided that threw away too much valuable information. The fact
that lower_bound is twice as fast as equal_range is important.
However, it's better to allow an arbitrary additive constant than to
specify an exact count. An exact count would have to
involve floor or ceil. It would be too easy to
get this wrong, and it would provide no substantial value for users.

386. Reverse iterator's operator[] has impossible return type

In 24.5.1.3.11 [reverse.iter.op-=], reverse_iterator<>::operator[]
is specified as having a return type of reverse_iterator::reference,
which is the same as iterator_traits<Iterator>::reference.
(Where Iterator is the underlying iterator type.)

The trouble is that Iterator's own operator[] doesn't
necessarily have a return type
of iterator_traits<Iterator>::reference. Its
return type is merely required to be convertible
to Iterator's value type. The return type specified for
reverse_iterator's operator[] would thus appear to be impossible.

With the resolution of issue 299, the type of
a[n] will continue to be required (for random access
iterators) to be convertible to the value type, and also a[n] =
t will be a valid expression. Implementations of
reverse_iterator will likely need to return a proxy from
operator[] to meet these requirements. As mentioned in the
comment from Dave Abrahams, the simplest way to specify that
reverse_iterator meets this requirement is just to mandate
it and leave the return type of operator[] unspecified.
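
A hypothetical sketch (not the standard's specification) shows why pinning the return type down is problematic: implementing operator[] in terms of the underlying iterator yields whatever type current[-n-1] has, which need not be iterator_traits<Iterator>::reference.

```cpp
#include <cstddef>

// Minimal sketch of a reverse iterator whose operator[] simply forwards
// to the underlying iterator; the deduced return type is whatever
// current[-n-1] yields, hence "unspecified" in the resolution.
template <class Iterator>
struct rev_iter_sketch {
    Iterator current;   // points one past the element *this designates
    decltype(auto) operator[](std::ptrdiff_t n) const {
        return current[-n - 1];
    }
};
```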

[
Comments from Dave Abrahams: IMO we should resolve 386 by just saying
that the return type of reverse_iterator's operator[] is
unspecified, allowing the random access iterator requirements to
impose an appropriate return type. If we accept 299's proposed
resolution (and I think we should), the return type will be
readable and writable, which is about as good as we can do.
]

387. std::complex over-encapsulated

The absence of explicit description of std::complex<T> layout
makes it impossible to reuse existing software developed in traditional
languages like Fortran or C with unambiguous and commonly accepted
layout assumptions. There ought to be a way for practitioners to
predict with confidence the layout of std::complex<T> whenever T
is a numerical datatype. The absence of ways to access individual
parts of a std::complex<T> object as lvalues unduly promotes
severe pessimizations. For example, the only way to change,
independently, the real and imaginary parts is to write something like

complex<T> z;
// ...
// set the real part to r
z = complex<T>(r, z.imag());
// ...
// set the imaginary part to i
z = complex<T>(z.real(), i);

At this point, it seems appropriate to recall that a complex number
is, in effect, just a pair of numbers with no particular invariant to
maintain. Existing practice in numerical computations has it that a
complex number datatype is usually represented by Cartesian
coordinates. Therefore the over-encapsulation put in the specification
of std::complex<> is not justified.

Proposed resolution:

Add the following requirements to 26.4 [complex.numbers] as 26.3/4:

If z is an lvalue expression of type cvstd::complex<T> then

the expression reinterpret_cast<cv T(&)[2]>(z)
is well-formed; and

reinterpret_cast<cv T(&)[2]>(z)[0] designates the
real part of z; and

reinterpret_cast<cv T(&)[2]>(z)[1] designates the
imaginary part of z.

Moreover, if a is an expression of pointer type cvcomplex<T>*
and the expression a[i] is well-defined for an integer expression
i then:

reinterpret_cast<cv T*>(a)[2*i] designates the real
part of a[i]; and

reinterpret_cast<cv T*>(a)[2*i+1] designates the
imaginary part of a[i].

In 26.4.2 [complex] and 26.4.3 [complex.special] add the following member functions
(changing T to concrete types as appropriate for the specializations).

void real(T);
void imag(T);

Add to 26.4.4 [complex.members]

T real() const;

Returns: the value of the real component

void real(T val);

Assigns val to the real component.

T imag() const;

Returns: the value of the imaginary component

void imag(T val);

Assigns val to the imaginary component.

[Kona: The layout guarantee is absolutely necessary for C
compatibility. However, there was disagreement about the other part
of this proposal: retrieving elements of the complex number as
lvalues. An alternative: continue to have real() and imag() return
rvalues, but add set_real() and set_imag(). Straw poll: return
lvalues - 2, add setter functions - 5. Related issue: do we want
reinterpret_cast as the interface for converting a complex to an
array of two reals, or do we want to provide a more explicit way of
doing it? Howard will try to resolve this issue for the next
meeting.]

[pre-Sydney: Howard summarized the options in n1589.]

[
Bellevue:
]

Second half of proposed wording replaced and moved to Ready.

[
Pre-Sophia Antipolis, Howard adds:
]

Added the members to 26.4.3 [complex.special] and changed from Ready to Review.

[
Post-Sophia Antipolis:
]

Moved from WP back to Ready so that the "and 26.4.3 [complex.special]" in the proposed
resolution can be officially applied.

Rationale:

The LWG believes that C99 compatibility would be enough
justification for this change even without other considerations. All
existing implementations already have the layout proposed here.
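
The layout guarantee in the proposed resolution can be exercised directly; this is a sketch of the interoperability it enables (e.g. passing complex data to C or Fortran routines that expect interleaved real/imaginary pairs):

```cpp
#include <complex>

// Per the resolution, a complex<T> is layout-compatible with T[2]:
// element 0 is the real part, element 1 the imaginary part.
double real_part_via_cast(std::complex<double>& z) {
    double (&parts)[2] = reinterpret_cast<double(&)[2]>(z);
    return parts[0];
}
double imag_part_via_cast(std::complex<double>& z) {
    return reinterpret_cast<double(&)[2]>(z)[1];
}
```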

While the call numbered #1 succeeds, the call numbered #2 fails
because the const version of the member function
valarray<T>::operator[](size_t) returns a value instead of a
const-reference. That seems to be so for no apparent reason and no
benefit. Not only does that defeat users' expectations, but it also
hinders the integration of existing software (written in either C or
Fortran) into programs written in C++. There is no reason why
subscripting an expression of type valarray<T> that is const-qualified
should not return a const T&.
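
An illustrative example of the difference (the original report's calls "#1" and "#2" are not reproduced in this excerpt; the sketch below shows the same kind of failure): taking the address of an element works through a non-const valarray but not through a const one, because the const operator[] returns by value.

```cpp
#include <valarray>

// #1-style call: fine, the non-const operator[] returns T&.
double* address_of_first(std::valarray<double>& va) {
    return &va[0];
}
// #2-style call: ill-formed before the fix, because
//     T operator[](size_t) const;
// returns an rvalue whose address cannot be taken:
//
//     const double* caddress_of_first(const std::valarray<double>& va) {
//         return &va[0];   // error prior to the resolution
//     }
```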

Proposed resolution:

In the class synopsis in 26.6.2 [template.valarray], and in
26.6.2.4 [valarray.access] just above paragraph 1, change

T operator[](size_t) const;

to

const T& operator[](size_t) const;

[Kona: fixed a minor typo: put semicolon at the end of the line
where it belongs.]

Rationale:

Return by value seems to serve no purpose. Valarray was explicitly
designed to have a specified layout so that it could easily be
integrated with libraries in other languages, and return by value
defeats that purpose. It is believed that this change will have no
impact on allowable optimizations.

391. non-member functions specified as const

The specifications of toupper and tolower both specify the functions as
const, although they are not member functions, and are not specified as
const in the header file synopsis in section 22.3 [locales].

Proposed resolution:

In 22.3.3.2 [conversions], remove const from the function
declarations of std::toupper and std::tolower

Rationale:

Fixes an obvious typo.

395. inconsistencies in the definitions of rand() and random_shuffle()

In 26.8 [c.math], the C++ standard refers to the C standard for the
definition of rand(); in the C standard, it is written that "The
implementation shall behave as if no library function calls the rand
function."

In 25.3.12 [alg.random.shuffle], there is no specification as to
how the two parameter version of the function generates its random
value. I believe that all current implementations in fact call rand()
(in contradiction with the requirement above); if an implementation does
not call rand(), there is the question of how whatever random generator
it does use is seeded. Something is missing.

Proposed resolution:

In [lib.c.math], add a paragraph specifying that the C definition of
rand shall be modified to say that "Unless otherwise specified, the
implementation shall behave as if no library function calls the rand
function."

In [lib.alg.random.shuffle], add a sentence to the effect that "In
the two argument form of the function, the underlying source of
random numbers is implementation defined. [Note: in particular, an
implementation is permitted to use rand.]"

Rationale:

The original proposed resolution proposed requiring the
two-argument form of random_shuffle to
use rand. We don't want to do that, because some existing
implementations already use something else: gcc
uses lrand48, for example. Using rand presents a
problem if the number of elements in the sequence is greater than
RAND_MAX.
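
The practice at issue can be sketched as follows (a simplified illustration, not any vendor's actual implementation): a two-argument random_shuffle written in terms of rand(), which the unmodified C wording ("no library function calls the rand function") would rule out.

```cpp
#include <cstdlib>
#include <utility>

// Fisher-Yates shuffle drawing its randomness from rand(); exactly the
// kind of implementation the resolution's modified wording permits.
// Note the RAND_MAX limitation mentioned above: rand() % n is poor when
// n approaches or exceeds RAND_MAX.
template <class RandomIt>
void random_shuffle_sketch(RandomIt first, RandomIt last) {
    for (auto n = last - first; n > 1; --n)
        std::swap(first[n - 1], first[std::rand() % n]);
}
```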

396. what are characters zero and one

23.3.5.1, p6 [lib.bitset.cons] talks about a generic character
having the value of 0 or 1 but there is no definition of what
that means for charT other than char and wchar_t. And even for
those two types, the values 0 and 1 are not actually what is
intended -- the values '0' and '1' are. This, along with the
converse problem in the description of to_string() in 23.3.5.2,
p33, looks like a defect remotely related to DR 303.

http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/lwg-defects.html#303

23.3.5.1:
-6- An element of the constructed string has value zero if the
corresponding character in str, beginning at position pos,
is 0. Otherwise, the element has the value one.

23.3.5.2:
-33- Effects: Constructs a string object of the appropriate
type and initializes it to a string of length N characters.
Each character is determined by the value of its
corresponding bit position in *this. Character position N
- 1 corresponds to bit position zero. Subsequent decreasing
character positions correspond to increasing bit positions.
Bit value zero becomes the character 0, bit value one becomes
the character 1.

Also note the typo in 23.3.5.1, p6: the object under construction
is a bitset, not a string.
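
The intended mapping is between the characters '0' and '1' (not the values 0 and 1) and bit values zero and one, in both directions:

```cpp
#include <bitset>
#include <string>

// The character '1' at string position i maps to bit value one at bit
// position N - 1 - i, and to_string() maps back to '0'/'1' characters.
std::bitset<4> from_string() {
    return std::bitset<4>(std::string("0110"));   // bits 2 and 1 set
}
```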

[
Sophia Antipolis:
]

We note that bitset has been moved from section 23 to section 20, by
another issue (842) previously resolved at this meeting.

Disposition: move to ready.

We request that Howard submit a separate issue regarding the three to_string overloads.

Change the first two sentences of 20.5.1 [bitset.cons] p6 to: "An
element of the constructed string has value 0 if the corresponding
character in str, beginning at position pos,
is zero. Otherwise, the element has the value 1."

Change the text of the second sentence in 23.3.5.1, p5 to read:
"The function then throws invalid_argument if any of the rlen
characters in str beginning at position pos is other than zero
or one. The function uses traits::eq() to compare the character
values."

Change the declaration of the to_string member function
immediately before 20.5.2 [bitset.members] p33 to:

There is a real problem here: we need the character values of '0'
and '1', and we have no way to get them since strings don't have
imbued locales. In principle the "right" solution would be to
provide an extra object, either a ctype facet or a full locale,
which would be used to widen '0' and '1'. However, there was some
discomfort about using such a heavyweight mechanism. The proposed
resolution allows those users who care about this issue to get it
right.

We fix the inserter to use the new arguments. Note that we already
fixed the analogous problem with the extractor in issue 303.

[
post Bellevue:
]

We are happy with the resolution as proposed, and we move this to Ready.

For those two type casts ("(void*)p" and "(T*)p") to be well-formed,
conversions to T* and void* would be required for all
alloc<T>::pointer types; this would implicitly introduce extra
requirements for alloc<T>::pointer in addition to the only
current requirement (being a random access iterator).

Note: Actually I would prefer to replace "((T*)p)?->dtor_name" with
"p?->dtor_name", but AFAICS this is not possible because of an omission
in 13.5.6 [over.ref] (for which I have filed another DR on 29.11.2002).

[Kona: The LWG thinks this is somewhere on the border between
Open and NAD. The intent is clear: construct constructs an
object at the location p. It's reading too much into the
description to think that literally calling new is
required. Tweaking this description is low priority until we can do
a thorough review of allocators, and, in particular, allocators with
non-default pointer types.]

[
Batavia: Proposed resolution changed to less code and more description.
]

[
post Oxford: This would be rendered NAD Editorial by acceptance of
N2257.
]

[
Kona (2007): The LWG adopted the proposed resolution of N2387 for this issue which
was subsequently split out into a separate paper N2436 for the purposes of voting.
The resolution in N2436 addresses this issue. The LWG voted to accelerate this
issue to Ready status to be voted into the WP at Kona.
]

This applies to the new expression that is contained in both par12 of
20.6.9.1 [allocator.members] and in par2 (table 32) of [default.con.req].
I think this new expression is wrong, involving unintended side
effects.

Because it uses "new" rather than "::new", any existing "T::operator new"
function will hide the global placement new function. When there is no
"T::operator new" with adequate signature,
every_alloc<T>::construct(..) is ill-formed, and most
std::container<T,every_alloc<T>> use it; a workaround
would be adding placement new and delete functions with adequate
signature and semantic to class T, but class T might come from another
party. Maybe even worse is the case when T has placement new and
delete functions with adequate signature but with "unknown" semantic:
I don't like to speculate about it, but whoever implements
any_container<T,any_alloc> and wants to use construct(..)
probably must think about it.

Proposed resolution:

Replace "new" with "::new" in both cases.
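
The hiding problem described above can be sketched as follows (HasOpNew and construct_at_sketch are illustrative names, not library entities): once a class declares any operator new, an unqualified placement new expression no longer finds the global placement form, while the qualified ::new always does.

```cpp
#include <cstddef>
#include <new>

struct HasOpNew {
    static void* operator new(std::size_t);   // hides all global forms
    int v;
    HasOpNew(int x) : v(x) {}
};

template <class T>
void construct_at_sketch(T* p, const T& value) {
    ::new (static_cast<void*>(p)) T(value);   // qualified: always finds
                                              // ::operator new(size_t, void*)
    // new (static_cast<void*>(p)) T(value); // ill-formed for HasOpNew:
                                              // lookup stops at the class
}
```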

403. basic_string::swap should not throw exceptions

std::basic_string, 21.4 [basic.string] paragraph 2 says that
basic_string "conforms to the requirements of a Sequence, as specified
in (23.1.1)." The sequence requirements specified in (23.1.1) do not
include any prohibition on swap members throwing exceptions.

Section 23.2 [container.requirements] paragraph 10 does limit conditions under
which exceptions may be thrown, but applies only to "all container
types defined in this clause" and so excludes basic_string::swap
because it is defined elsewhere.

Eric Niebler points out that 21.4 [basic.string] paragraph 5 explicitly
permits basic_string::swap to invalidate iterators, which is
disallowed by 23.2 [container.requirements] paragraph 10. Thus the standard would
be contradictory if it were read or extended to read as having
basic_string meet 23.2 [container.requirements] paragraph 10 requirements.

Yet several LWG members have expressed the belief that the original
intent was that basic_string::swap should not throw exceptions as
specified by 23.2 [container.requirements] paragraph 10, and that the standard is
unclear on this issue. The complexity of basic_string::swap is
specified as "constant time", indicating the intent was to avoid
copying (which could cause a bad_alloc or other exception). An
important use of swap is to ensure that exceptions are not thrown in
exception-safe code.
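
The exception-safety use of swap mentioned above is the familiar copy-and-swap idiom; a minimal sketch:

```cpp
#include <string>

// All work that can throw happens on a copy; the final commit via swap
// must not throw, which is exactly the guarantee this issue adds for
// basic_string::swap.
void assign_safely(std::string& target, const std::string& source) {
    std::string tmp(source);   // may throw; target untouched if it does
    target.swap(tmp);          // nothrow commit
}
```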

Note: There remains long standing concern over whether or not it is
possible to reasonably meet the 23.2 [container.requirements] paragraph 10 swap
requirements when allocators are unequal. The specification of
basic_string::swap exception requirements is in no way intended to
address, prejudice, or otherwise impact that concern.

Proposed resolution:

In 21.4.6.8 [string::swap], add a throws clause:

Throws: Shall not throw exceptions.

404. May a replacement allocation function be declared inline?

The eight basic dynamic memory allocation functions (single-object
and array versions of ::operator new and ::operator delete, in the
ordinary and nothrow forms) are replaceable. A C++ program may
provide an alternative definition for any of them, which will be used
in preference to the implementation's definition.

Three different parts of the standard mention requirements on
replacement functions: 17.6.4.6 [replacement.functions], 18.6.1.1 [new.delete.single]
and 18.6.1.2 [new.delete.array], and 3.7.3 [basic.stc.auto].

None of these three places say whether a replacement function may
be declared inline. 18.6.1.1 [new.delete.single] paragraph 2 specifies a
signature for the replacement function, but that's not enough:
the inline specifier is not part of a function's signature.
One might also reason from 7.1.2 [dcl.fct.spec] paragraph 2, which
requires that "an inline function shall be defined in every
translation unit in which it is used," but this may not be quite
specific enough either. We should either explicitly allow or
explicitly forbid inline replacement memory allocation
functions.

Proposed resolution:

Add a new sentence to the end of 17.6.4.6 [replacement.functions] paragraph 3:
"The program's definitions shall not be specified as inline.
No diagnostic is required."

[Kona: added "no diagnostic is required"]

Rationale:

The fact that inline isn't mentioned appears to have been
nothing more than an oversight. Existing implementations do not
permit inline functions as replacement memory allocation functions.
Providing this functionality would be difficult in some cases, and is
believed to be of limited value.
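
For illustration, a conforming replacement is an ordinary, non-inline definition at global scope; per the resolution, adding inline here is undefined behavior with no diagnostic required.

```cpp
#include <cstdlib>
#include <new>

// Minimal sketch of a replacement single-object allocation pair.
// Deliberately NOT declared inline.
void* operator new(std::size_t size) {
    if (void* p = std::malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept {
    std::free(p);
}
```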

405. qsort and POD

Section 25.5 [alg.c.library] describes bsearch and qsort, from the C
standard library. Paragraph 4 does not list any restrictions on qsort,
but it should limit the base parameter to point to POD. Presumably,
qsort sorts the array by copying bytes, which requires POD.
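
A usage sketch makes the restriction concrete: qsort reorders elements by copying raw bytes, which is safe for an array of int (a POD type) but could corrupt a type with, say, virtual functions or self-referential members.

```cpp
#include <cstddef>
#include <cstdlib>

// qsort over a POD element type; the comparator follows the C
// convention of returning negative/zero/positive.
extern "C" int compare_ints(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}
void sort_ints(int* data, std::size_t n) {
    std::qsort(data, n, sizeof(int), compare_ints);
}
```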

Proposed resolution:

In 25.5 [alg.c.library] paragraph 4, just after the declarations and
before the nonnormative note, add these words: "both of which have the
same behavior as the original declaration. The behavior is undefined
unless the objects in the array pointed to by base are of POD
type."

[Something along these lines is clearly necessary. Matt
provided wording.]

There is a possible defect in the standard: the standard text was
never intended to prevent arbitrary ForwardIterators, whose operations
may throw exceptions, from being passed, and it also wasn't intended
to require a temporary buffer in the case where ForwardIterators were
passed (and I think most implementations don't use one). As is, the
standard appears to impose requirements that aren't met by any
existing implementation.

Proposed resolution:

Replace 23.3.6.5 [vector.modifiers] paragraph 1 with:

1- Notes: Causes reallocation if the new size is greater than the
old capacity. If no reallocation happens, all the iterators and
references before the insertion point remain valid. If an exception
is thrown other than by the copy constructor or assignment operator
of T or by any InputIterator operation there are no effects.

[We probably need to say something similar for deque.]

407. Can singular iterators be destroyed?

Clause X [iterator.concepts], paragraph 5, says that the only expression
that is defined for a singular iterator is "an assignment of a
non-singular value to an iterator that holds a singular value". This
means that destroying a singular iterator (e.g. letting an automatic
variable go out of scope) is technically undefined behavior. This
seems overly strict, and probably unintentional.

Proposed resolution:

Change the sentence in question to "... the only exceptions are
destroying an iterator that holds a singular value, or the assignment
of a non-singular value to an iterator that holds a singular value."
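
Under the corrected wording, both operations on the singular iterator in this sketch are well-defined: assigning it a non-singular value, and destroying it when the enclosing scope ends.

```cpp
#include <vector>

int use_singular_iterator() {
    std::vector<int> v(1, 42);
    std::vector<int>::iterator it;   // singular: almost no defined operations
    it = v.begin();                  // assignment of a non-singular value: OK
    return *it;
}                                    // destruction of it: OK per the fix
```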

A strict reading of 27.9.1 [fstreams] shows that opening or
closing a basic_[io]fstream does not affect the error bits. This
means, for example, that if you read through a file up to EOF, and
then close the stream and reopen it at the beginning of the file,
the EOF bit in the stream's error state is still set. This is
counterintuitive.

The LWG considered this issue once before, as issue 22,
and put in a footnote to clarify that the strict reading was indeed
correct. We did that because we believed the standard was
unambiguous and consistent, and that we should not make architectural
changes in a TC. Now that we're working on a new revision of the
language, those considerations no longer apply.

[Kona: the LWG agrees this is a good idea. Post-Kona: Bill
provided wording. He suggests having open, not close, clear the error
flags.]

[Post-Sydney: Howard provided a new proposed resolution. The
old one didn't make sense because it proposed to fix this at the
level of basic_filebuf, which doesn't have access to the stream's
error state. Howard's proposed resolution fixes this at the level
of the three fstream class template instead.]
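
A sketch of the behavior the resolution produces (the file name is just an example): after reading to end-of-file, close() followed by a fresh open() leaves the stream usable again, because open() clears the error flags.

```cpp
#include <fstream>

bool reopen_is_good(const char* name) {
    std::ifstream f(name);
    int x;
    while (f >> x) { }   // reading past the end sets eofbit (and failbit)
    f.close();
    f.open(name);        // resolution: a successful open() clears rdstate()
    return f.good();     // true under the fix; the flags lingered before it
}
```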

25.4.5 [alg.set.operations] paragraph 1 reads:
"The semantics of the set operations are generalized to multisets in a
standard way by defining union() to contain the maximum number of
occurrences of every element, intersection() to contain the minimum, and
so on."

This is wrong. The name of the functions are set_union() and
set_intersection(), not union() and intersection().

The Effects clause in 27.5.5.4 [iostate.flags] paragraph 5 says that the
function only throws if the respective bits are already set prior to
the function call. That's obviously not the intent. The typo ought to
be corrected and the text reworded as: "If (state &
exceptions()) == 0, returns. ..."

[Kona: the original proposed resolution wasn't quite right. We
really do mean rdstate(); the ambiguity is that the wording in the
standard doesn't make it clear whether we mean rdstate() before
setting the new state, or rdstate() after setting it. We intend the
latter, of course. Post-Kona: Martin provided wording.]

"If it inserted no characters because it caught an exception thrown
while extracting characters from sb and ..."

However, we are not extracting from sb, but extracting from the
basic_istream (*this) and inserting into sb. I can't really tell if
"extracting" or "sb" is a typo.

[
Sydney: Definitely a real issue. We are, indeed, extracting characters
from an istream and not from sb. The problem was there in the FDIS and
wasn't fixed by issue 64. Probably what was intended was
to have *this instead of sb. We're talking about the exception flag
state of a basic_istream object, and there's only one basic_istream
object in this discussion, so that would be a consistent
interpretation. (But we need to be careful: the exception policy of
this member function must be consistent with that of other
extractors.) PJP will provide wording.
]

Proposed resolution:

Change the sentence from:

If it inserted no characters because it caught an exception thrown
while extracting characters from sb and failbit is on in exceptions(),
then the caught exception is rethrown.

to:

If it inserted no characters because it caught an exception thrown
while extracting characters from *this and failbit is on in exceptions(),
then the caught exception is rethrown.

Which iterators are invalidated by v.erase(i1): i1, i2,
both, or neither?

On all existing implementations that I know of, the status of i1 and
i2 is the same: both of them will be iterators that point to some
elements of the vector (albeit not the same elements they did
before). You won't get a crash if you use them. Depending on
exactly what you mean by "invalidate", you might say that neither one
has been invalidated because they still point to something,
or you might say that both have been invalidated because in both
cases the elements they point to have been changed out from under the
iterator.

The standard doesn't say either of those things. It says that erase
invalidates all iterators and references "after the point of the
erase". This doesn't include i1, since it's at the point of the
erase instead of after it. I can't think of any sensible definition
of invalidation by which one can say that i2 is invalidated but i1
isn't.

(This issue is important if you try to reason about iterator validity
based only on the guarantees in the standard, rather than reasoning
from typical implementation techniques. Strict debugging modes,
which some programmers find useful, do not use typical implementation
techniques.)
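
An illustrative setup (the original report's definitions of i1 and i2 are not reproduced in this excerpt; take i1 at the erased element and i2 just after it):

```cpp
#include <vector>

std::vector<int> after_erase() {
    std::vector<int> v;
    v.push_back(10); v.push_back(20); v.push_back(30);
    std::vector<int>::iterator i1 = v.begin() + 1;  // points at 20
    v.erase(i1);
    // Under the resolution, i1 (at the point of the erase) and every
    // iterator after it are invalidated, even though on typical
    // implementations they still point into the vector's storage.
    return v;                                        // now {10, 30}
}
```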

Proposed resolution:

In 23.3.6.5 [vector.modifiers] paragraph 3, change "Invalidates all the
iterators and references after the point of the erase" to
"Invalidates iterators and references at or after the point of the
erase".

Rationale:

I believe this was essentially a typographical error, and that it
was taken for granted that erasing an element invalidates iterators
that point to it. The effects clause in question treats iterators
and references in parallel, and it would seem counterintuitive to
say that a reference to an erased value remains valid.

415. behavior of std::ws

According to 27.6.1.4, the ws() manipulator is not required to construct
the sentry object. The manipulator is also not a member function so the
text in 27.6.1, p1 through 4 that describes the exception policy for
istream member functions does not apply. That seems inconsistent with
the rest of extractors and all the other input functions (i.e., ws will
not cause a tied stream to be flushed before extraction, it doesn't check
the stream's exceptions or catch exceptions thrown during input, and it
doesn't affect the stream's gcount).
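
A small usage sketch: ws discards leading whitespace, and under the resolution it behaves as an unformatted input function except that it leaves gcount() unaffected.

```cpp
#include <istream>
#include <sstream>
#include <string>

std::string read_after_ws(const std::string& text) {
    std::istringstream in(text);
    std::string line;
    in >> std::ws;            // skip leading whitespace
    std::getline(in, line);   // then read the rest of the line
    return line;
}
```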

Proposed resolution:

Add to 27.7.2.4 [istream.manip], immediately before the first sentence
of paragraph 1, the following text:

Behaves as an unformatted input function (as described in
27.6.1.3, paragraph 1), except that it does not count the number
of characters extracted and does not affect the value returned by
subsequent calls to is.gcount(). After constructing a sentry
object...

[Post-Kona: Martin provided wording]

416. definitions of XXX_MIN and XXX_MAX macros in climits

Given two overloads of the function foo(), one taking an argument of type
int and the other taking a long, which one will the call foo(LONG_MAX)
resolve to? The expected answer should be foo(long), but whether that
is true depends on the #defintion of the LONG_MAX macro, specifically
its type. This issue is about the fact that the type of these macros
is not actually required to be the same as the the type each respective
limit.
Section 18.2.2 of the C++ Standard does not specify the exact types of
the XXX_MIN and XXX_MAX macros #defined in the <climits> and <limits.h>
headers such as INT_MAX and LONG_MAX and instead defers to the C standard.
Section 5.2.4.2.1, p1 of the C standard specifies that "The values [of
these constants] shall be replaced by constant expressions suitable for use
in #if preprocessing directives. Moreover, except for CHAR_BIT and MB_LEN_MAX,
the following shall be replaced by expressions that have the same type as
would an expression that is an object of the corresponding type converted
according to the integer promotions."
The "corresponding type converted according to the integer promotions" for
LONG_MAX is, according to 6.4.4.1, p5 of the C standard, the type of long
converted to the first of the following set of types that can represent it:
int, long int, long long int. So on an implementation where (sizeof(long)
== sizeof(int)) this type is actually int, while on an implementation where
(sizeof(long) > sizeof(int)) holds this type will be long.
This is not an issue in C since the type of the macro cannot be detected
by any conforming C program, but it presents a portability problem in C++
where the actual type is easily detectable by overload resolution.
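
The detectability can be sketched with a pair of overloads (pick is an illustrative name): on an implementation where LONG_MAX fits in int, the macro may have type int and select the int overload; where it does not fit, it must have type long.

```cpp
#include <climits>

// The macro's type is observable through overload resolution.
int pick(int)  { return 0; }
int pick(long) { return 1; }

int chosen_for_long_max() {
    return pick(LONG_MAX);   // 0 or 1, depending on the platform
}
```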

[Kona: the LWG does not believe this is a defect. The C macro
definitions are what they are; we've got a better
mechanism, std::numeric_limits, that is specified more
precisely than the C limit macros. At most we should add a
nonnormative note recommending that users who care about the exact
types of limit quantities should use <limits> instead of
<climits>.]

Proposed resolution:

Change 18.3.3 [c.limits], paragraph 2:

-2- The contents are the same as the Standard C library header <limits.h>.
[Note: The types of the macros in <climits> are not guaranteed
to match the type to which they refer.--end note]

27.7.2.1.3 [istream::sentry], p2 says that istream::sentry ctor prepares for input if is.good()
is true. p4 then goes on to say that the ctor sets the sentry::ok_ member to
true if the stream state is good after any preparation. 27.7.2.2.1 [istream.formatted.reqmts], p1 then
says that a formatted input function endeavors to obtain the requested input
if the sentry's operator bool() returns true.
Given these requirements, no formatted extractor should ever set failbit if
the initial stream rdstate() == eofbit. That is contrary to the behavior of
all implementations I tested. The program below prints out

Comments from Jerry Schwarz (c++std-lib-11373):
Jerry Schwarz wrote:
I don't know where (if anywhere) it says it in the standard, but the
formatted extractors are supposed to set failbit if they don't extract
any characters. If they didn't then simple loops like
while (cin >> x);
would loop forever.
Further comments from Martin Sebor:
The question is which part of the extraction should prevent this from happening
by setting failbit when eofbit is already set. It could either be the sentry
object or the extractor. It seems that most implementations have chosen to
set failbit in the sentry [...] so that's the text that will need to be
corrected.

Pre Berlin: This issue is related to 342. If the sentry
sets failbit when it finds eofbit already set, then
you can never seek away from the end of stream.

Kona: Possibly NAD. If eofbit is set then good() will return false. We
then set ok to false. We believe that the sentry's
constructor should always set failbit when ok is false, and
we also think the standard already says that. Possibly it could be
clearer.
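
Jerry Schwarz's point can be made concrete with a sketch: if the sentry (or extractor) did not set failbit once eofbit is set, a conventional extraction loop would never terminate at end of input.

```cpp
#include <sstream>

int sum_ints(const char* text) {
    std::istringstream in(text);
    int x, sum = 0;
    while (in >> x)   // extraction fails (failbit) once input is exhausted
        sum += x;
    return sum;
}
```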

420. is std::FILE a complete type?

7.19.1, p2, of C99 requires that the FILE type only be declared in
<stdio.h>. None of the (implementation-defined) members of the
struct is mentioned anywhere for obvious reasons.

C++ says in 27.8.1, p2 that FILE is a type that's defined in <cstdio>. Is
it really the intent that FILE be a complete type or is an implementation
allowed to just declare it without providing a full definition?

Proposed resolution:

In the first sentence of 27.9.1 [fstreams] paragraph 2, change
"defined" to "declared".

Rationale:

We don't want to impose any restrictions beyond what the C standard
already says. We don't want to make anything implementation defined,
because that imposes new requirements in implementations.

422. explicit specializations of member functions of class templates

It has been suggested that 17.4.3.1, p1 may or may not allow programs to
explicitly specialize members of standard templates on user-defined types.
The answer to the question might have an impact where library requirements
are given using the "as if" rule. I.e., if programs are allowed to specialize
member functions they will be able to detect an implementation's strict
conformance to Effects clauses that describe the behavior of the function
in terms of the other member function (the one explicitly specialized by
the program) by relying on the "as if" rule.

Proposed resolution:

Add the following sentence to 17.6.4.3 [reserved.names], p1:

It is undefined for a C++ program to add declarations or definitions to
namespace std or namespaces within namespace std unless otherwise specified. A
program may add template specializations for any standard library template to
namespace std. Such a specialization (complete or partial) of a standard library
template results in undefined behavior unless the declaration depends on a
user-defined type of external linkage and unless the specialization meets the
standard library requirements for the original template.(footnote 168)

A program has undefined behavior if it declares

an explicit specialization of any member function of a standard
library class template, or

an explicit specialization of any member function template of a
standard library class or class template, or

an explicit or partial specialization of any member class
template of a standard library class or class template.

A program may explicitly instantiate any templates in the standard library only
if the declaration depends on the name of a user-defined type of external
linkage and the instantiation meets the standard library requirements for the
original template.

[Kona: straw poll was 6-1 that user programs should not be
allowed to specialize individual member functions of standard
library class templates, and that doing so invokes undefined
behavior. Post-Kona: Martin provided wording.]

[Sydney: The LWG agrees that the standard shouldn't permit users
to specialize individual member functions unless they specialize the
whole class, but we're not sure these words say what we want them to;
they could be read as prohibiting the specialization of any standard
library class templates. We need to consult with CWG to make sure we
use the right wording.]
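The dividing line drawn by the proposed wording can be illustrated as follows (a sketch; Employee is an invented user-defined type). Specializing a whole standard library template on a user-defined type remains permitted; explicitly specializing an individual member function does not:

```cpp
#include <cstddef>
#include <functional>
#include <string>

struct Employee { std::string id; };   // user-defined type of external linkage

// Permitted: a complete specialization of a standard library template that
// depends on a user-defined type and meets the original template's requirements.
namespace std {
    template <>
    struct hash<Employee> {
        size_t operator()(const Employee& e) const {
            return hash<string>()(e.id);
        }
    };
}

// Not permitted under this resolution: an explicit specialization of a single
// member function of a standard library class template, e.g.
//   template <> void std::vector<Employee>::clear();   // undefined behavior
```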

425. return value of std::get_temporary_buffer

The standard is not clear about the requirements on the value returned from
a call to get_temporary_buffer(0). In particular, it fails to specify whether
the call should return a distinct pointer each time it is called (like
operator new), or whether the value is unspecified (as if returned by
malloc). The standard also fails to mention what the required behavior
is when the argument is less than 0.

Proposed resolution:

Change 20.9.3 [meta.help] paragraph 2 from "...or a pair of 0
values if no storage can be obtained" to "...or a pair of 0 values if
no storage can be obtained or if n <= 0."

In addition, the Requirements or the Effects clauses for the latter two
templates don't say anything about the behavior when n is negative.

Proposed resolution:

Change 25.1.9, p7 to

Complexity: At most (last1 - first1) * count applications
of the corresponding predicate if count is positive,
or 0 otherwise.

Change 25.2.5, p2 to

Effects: Assigns value through all the iterators in the range [first,
last), or [first, first + n) if n is positive, none otherwise.

Change 25.2.5, p3 to:

Complexity: Exactly last - first (or n if n is positive,
or 0 otherwise) assignments.

Change 25.2.6, p1
to (notice the correction for the misspelled "through"):

Effects: Invokes the function object gen and assigns the return
value of gen through all the iterators in the range [first, last),
or [first, first + n) if n is positive, or [first, first)
otherwise.

Change 25.2.6, p3 to:

Complexity: Exactly last - first (or n if n is positive,
or 0 otherwise) assignments.

Rationale:

Informally, we want to say that whenever we see a negative number
we treat it the same as if it were zero. We believe the above
changes do that (although they may not be the minimal way of saying
so). The LWG considered and rejected the alternative of saying that
negative numbers are undefined behavior.
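The intended reading can be demonstrated with fill_n (a sketch of the resolved behavior, not normative wording): a non-positive n behaves exactly as if it were zero, so no assignments are performed.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Fills the first n elements of a three-element vector of 7s with 0.
// With n <= 0, the vector is left untouched.
std::vector<int> fill_first(int n) {
    std::vector<int> v(3, 7);
    std::fill_n(v.begin(), n, 0);   // "n if n is positive, or 0 otherwise"
    return v;
}
```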

The requirements specified in Stage 2 and reiterated in the rationale
of DR 221 (and echoed again in DR 303) specify that num_get<charT>::
do_get() compares characters on the stream against the widened elements
of "012...abc...ABCX+-"

An implementation is required to allow programs to instantiate the num_get
template on any charT that satisfies the requirements on a user-defined
character type. These requirements do not include the ability of the
character type to be equality comparable (the char_traits template must
be used to perform tests for equality). Hence, the num_get template cannot
be implemented to support any arbitrary character type. The num_get template
must either make the assumption that the character type is equality-comparable
(as some popular implementations do), or it may use char_traits<charT> to do
the comparisons (some other popular implementations do that). This diversity
of approaches makes it difficult to write portable programs that attempt to
instantiate the num_get template on user-defined types.

[Kona: the heart of the problem is that we're theoretically
supposed to use traits classes for all fundamental character
operations like assignment and comparison, but facets don't have
traits parameters. This is a fundamental design flaw and it
appears all over the place, not just in this one place. It's not
clear what the correct solution is, but a thorough review of facets
and traits is in order. The LWG considered and rejected the
possibility of changing numeric facets to use narrowing instead of
widening. This may be a good idea for other reasons (see issue
459), but it doesn't solve the problem raised by this
issue. Whether we use widen or narrow the num_get facet
still has no idea which traits class the user wants to use for
the comparison, because only streams, not facets, are passed traits
classes. The standard does not require that two different
traits classes with the same char_type must necessarily
have the same behavior.]

Informally, one possibility: require that some of the basic
character operations, such as eq, lt,
and assign, must behave the same way for all traits classes
with the same char_type. If we accept that limitation on
traits classes, then the facet could reasonably be required to
use char_traits<charT>.
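Under that possibility, a facet implementation would route every character comparison through the traits class rather than requiring charT itself to be equality-comparable, along these lines (a sketch with invented names):

```cpp
#include <string>

// Compares two characters the way a char_traits-based facet would:
// no operator== on charT is required, only char_traits<charT>::eq.
template <class charT>
bool is_same_char(charT a, charT b) {
    return std::char_traits<charT>::eq(a, b);
}
```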

[
2009-07 Frankfurt
]

There was general agreement that the standard only needs to specify the
behavior when the character type is char or wchar_t.

Beman: we don't need to worry about C++1x because there is a non-zero
possibility that we would have a replacement facility for iostreams that
would solve these problems.

We need to change the following sentence in [locale.category], paragraph
6 to specify that C is char and wchar_t:

"A template formal parameter with name C represents the set of all
possible specializations on a parameter that satisfies the requirements
for a character on which any member of the iostream components can be
instantiated."

We also need to specify in 27 that the basic character operations, such
as eq, lt, and assign use std::char_traits.

Daniel volunteered to provide wording.

[
2009-09-19 Daniel provided wording.
]

[
2009-10 Santa Cruz:
]

Leave as Open. Alisdair and/or Tom will provide wording based on discussions.
We want to clearly state that streams and locales work just on char
and wchar_t (except where otherwise specified).

[
2010-02-06 Tom updated the proposed wording.
]

[
The original proposed wording is preserved here:
]

Change 22.3.1.1.1 [locale.category]/6:

[..] A template formal parameter with name C represents the set of all possible
specializations on a char or wchar_t parameter that satisfies
the requirements for a character on which any of the iostream components
can be instantiated. [..]

Add the following sentence to the end of 22.4.2 [category.numeric]/2:

[..] These specializations refer to [..], and also for the ctype<> facet to
perform character classification. Implementations are encouraged
but not required to use the char_traits<charT> functions for all
comparisons and assignments of characters of type charT that do
not belong to the set of required specializations.

Change 22.4.2.1.2 [facet.num.get.virtuals]/3:

Stage 2: If in==end then stage 2 terminates. Otherwise a charT is taken
from in and local variables are initialized as if by

[Remark of the author: I considered replacing the initialization
"char_type ct = *in;"
with the sequence "char_type ct; tr::assign(ct, *in);", but decided
against it, because it is a copy-initialization context, not an
assignment.]

Add the following sentence to the end of 22.4.5 [category.time]/1:

[..] Their members use [..] , to determine formatting details.
Implementations are encouraged but not required to use the
char_traits<charT> functions for all comparisons and assignments
of characters of type charT that do
not belong to the set of required specializations.

Change 22.4.5.1.1 [locale.time.get.members]/8 bullet 4:

Replace "The next element of fmt is equal to '%'" with "For the next element c
of fmt, char_traits<char_type>::eq(c, use_facet<ctype<char_type>>(f.getloc()).widen('%')) == true",
[..]

Add the following sentence to the end of 22.4.6 [category.monetary]/2:

Their members use [..] to determine formatting details.
Implementations are encouraged but not required to use the
char_traits<charT> functions for all comparisons and assignments
of characters of type charT that do
not belong to the set of required specializations.

footnote) If the traits of the output stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Add a footnote after the first sentence of 27.7.5 [ext.manip]/4:

Returns: An object of unspecified type such that if in is an object of type
basic_istream<charT, traits> then the expression in >> get_money(mon, intl)
behaves as if it called f(in, mon, intl), where the function f is defined
as:(footnote) [..]

footnote) If the traits of the input stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Add a footnote after the first sentence of 27.7.5 [ext.manip]/5:

Returns: An object of unspecified type such that if out is an object of type
basic_ostream<charT, traits> then the expression out << put_money(mon, intl)
behaves as a formatted input function that calls f(out, mon, intl), where the
function f is defined as:(footnote) [..]

footnote) If the traits of the output stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

13) Add a footnote after the first sentence of 27.7.5 [ext.manip]/8:

Returns: An object of unspecified type such that if in is an
object of type basic_istream<charT, traits> then the expression
in >>get_time(tmb, fmt) behaves as if it called f(in, tmb, fmt),
where the function f is defined as:(footnote) [..]

footnote) If the traits of the input stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Add a footnote after the first sentence of 27.7.5 [ext.manip]/10:

Returns: An object of unspecified type such that if out is an object of type
basic_ostream<charT, traits> then the expression out <<put_time(tmb, fmt)
behaves as if it called f(out, tmb, fmt), where the function f is defined
as:(footnote) [..]

footnote) If the traits of the output stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

[
2010 Pittsburgh:
]

Moved to Ready with only two of the bullets. The original wording is preserved
here:

Change 22.3.1.1.1 [locale.category]/6:

[..] A template formal parameter with name C represents the set of types
containing char, wchar_t, and any other implementation-defined character
type that satisfies the requirements for a character on which any of the
iostream components can be instantiated. [..]

Add the following sentence to the end of 22.4.2 [category.numeric]/2:

[..] These specializations refer to [..], and also for the ctype<> facet to
perform character classification. [Note: Implementations are encouraged
but not required to use the char_traits<charT> functions for all
comparisons and assignments of characters of type charT that do
not belong to the set of required specializations - end note].

Change 22.4.2.1.2 [facet.num.get.virtuals]/3:

Stage 2: If in==end then stage 2 terminates. Otherwise a charT is taken
from in and local variables are initialized as if by

[Remark of the author: I considered replacing the initialization
"char_type ct = *in;"
with the sequence "char_type ct; tr::assign(ct, *in);", but decided
against it, because it is a copy-initialization context, not an
assignment.]

Add the following sentence to the end of 22.4.5 [category.time]/1:

[..] Their members use [..] , to determine formatting details.
[Note: Implementations are encouraged but not required to use the
char_traits<charT> functions for all comparisons and assignments
of characters of type charT that do
not belong to the set of required specializations - end note].

Change 22.4.5.1.1 [locale.time.get.members]/8 bullet 4:

Replace "The next element of fmt is equal to '%'" with "For the next element c
of fmt, char_traits<char_type>::eq(c, use_facet<ctype<char_type>>(f.getloc()).widen('%')) == true",
[..]

Add the following sentence to the end of 22.4.6 [category.monetary]/2:

Their members use [..] to determine formatting details.
[Note: Implementations are encouraged but not required to use the
char_traits<charT> functions for all comparisons and assignments
of characters of type charT that do
not belong to the set of required specializations - end note].

[..] for character buffers buf1 and buf2. If for the first character c
in digits or buf2, char_traits<charT>::eq(c, ct.widen('-')) == true, [..]

Add a new paragraph after the
first paragraph of 27.2.2 [iostreams.limits.pos]/1:

In the classes of clause 27,
a template formal parameter with name charT represents
one of
the set of types
containing char, wchar_t,
and any other implementation-defined character type
that satisfies
the requirements for a character on which any of the iostream components
can be instantiated.

Add a footnote to the first sentence of 27.7.2.2.2 [istream.formatted.arithmetic]/1:

As in the case of the inserters, these extractors depend on the locale's
num_get<> (22.4.2.1) object to perform parsing the input stream
data.(footnote) [..]

footnote) If the traits of the input stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Add a footnote to the second sentence of 27.7.3.6.2 [ostream.inserters.arithmetic]/1:

footnote) If the traits of the output stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Add a footnote after the first sentence of 27.7.5 [ext.manip]/4:

Returns: An object of unspecified type such that if in is an object of type
basic_istream<charT, traits> then the expression in >> get_money(mon, intl)
behaves as if it called f(in, mon, intl), where the function f is defined
as:(footnote) [..]

footnote) If the traits of the input stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Add a footnote after the first sentence of 27.7.5 [ext.manip]/5:

Returns: An object of unspecified type such that if out is an object of type
basic_ostream<charT, traits> then the expression out << put_money(mon, intl)
behaves as a formatted input function that calls f(out, mon, intl), where the
function f is defined as:(footnote) [..]

footnote) If the traits of the output stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Add a footnote after the first sentence of 27.7.5 [ext.manip]/8:

Returns: An object of unspecified type such that if in is an
object of type basic_istream<charT, traits> then the expression
in >>get_time(tmb, fmt) behaves as if it called f(in, tmb, fmt),
where the function f is defined as:(footnote) [..]

footnote) If the traits of the input stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Add a footnote after the first sentence of 27.7.5 [ext.manip]/10:

Returns: An object of unspecified type such that if out is an object of type
basic_ostream<charT, traits> then the expression out <<put_time(tmb, fmt)
behaves as if it called f(out, tmb, fmt), where the function f is defined
as:(footnote) [..]

footnote) If the traits of the output stream has different semantics for lt(),
eq(), and assign() than char_traits<char_type>, this may give surprising
results.

Proposed resolution:

Change 22.3.1.1.1 [locale.category]/6:

[..] A template formal parameter with name C represents the set of types
containing char, wchar_t, and any other implementation-defined character
type that satisfies the requirements for a character on which any of the
iostream components can be instantiated. [..]

Add a new paragraph after the
first paragraph of 27.2.2 [iostreams.limits.pos]/1:

In the classes of clause 27,
a template formal parameter with name charT represents
one of
the set of types
containing char, wchar_t,
and any other implementation-defined character type
that satisfies
the requirements for a character on which any of the iostream components
can be instantiated.

428. string::erase(iterator) validity

23.1.1, p3 along with Table 67 specify as a prerequisite for a.erase(q)
that q must be a valid dereferenceable iterator into the sequence a.

However, 21.3.5.5, p5 describing string::erase(p) only requires that
p be a valid iterator.

This may be interpreted as a relaxation of the general requirement,
which is most likely not the intent.

Proposed resolution:

Remove 21.4.6.5 [string::erase] paragraph 5.

Rationale:

The LWG considered two options: changing the string requirements to
match the general container requirements, or just removing the
erroneous string requirements altogether. The LWG chose the latter
option, on the grounds that duplicating text always risks the
possibility that it might be duplicated incorrectly.
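The general container requirement the string case now falls under can be illustrated briefly (a sketch): erase(q) needs q to be a valid, dereferenceable iterator, so the sequence must be non-empty at the call site.

```cpp
#include <string>

// Removes the first character. The emptiness check guards the precondition
// that s.begin() be dereferenceable before calling erase.
std::string drop_first(std::string s) {
    if (!s.empty())
        s.erase(s.begin());
    return s;
}
```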

430. valarray subset operations

The standard fails to specify the behavior of valarray::operator[](slice)
and other valarray subset operations when they are passed an "invalid"
slice object, i.e., either a slice that doesn't make sense at all (e.g.,
slice (0, 1, 0) or one that doesn't specify a valid subset of the valarray
object (e.g., slice (2, 1, 1) for a valarray of size 1).

[Kona: the LWG believes that invalid slices should invoke
undefined behavior. Valarrays are supposed to be designed for high
performance, so we don't want to require specific checking. We
need wording to express this decision.]
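For contrast, a valid subset selection looks like this (a sketch with invented names): slice(start, size, stride) selects indices start, start+stride, ..., start+stride*(size-1), all of which must exist in the source array.

```cpp
#include <cstddef>
#include <valarray>

// Selects v[0], v[2], v[4], ... The count n is chosen so the last selected
// index, 2*(n-1), stays within v.size(), keeping the slice valid.
std::valarray<double> every_other(const std::valarray<double>& v) {
    std::size_t n = (v.size() + 1) / 2;
    return v[std::slice(0, n, 2)];
}
```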

[
Bellevue:
]

Please note that the standard also fails to specify the behavior of
slice_array and gslice_array in the valid case. Bill Plauger will
endeavor to provide revised wording for slice_array and gslice_array.

[
post-Bellevue: Bill provided wording.
]

[
2009-07 Frankfurt
]

Move to Ready.

[
2009-11-04 Pete opens:
]

The resolution to LWG issue 430 has not been applied — there have been
changes to the underlying text, and the resolution needs to be reworked.

[
2010-03-09 Matt updated wording.
]

[
2010 Pittsburgh: Moved to Ready for Pittsburgh.
]

Proposed resolution:

Replace 26.6.2.5 [valarray.sub], with the following:

The member operator is overloaded to provide several ways to select
sequences of elements from among those controlled by *this.
Each of these operations returns a subset of the array. The
const-qualified versions return this subset as a new valarray. The
non-const versions return a class template object which has reference
semantics to the original array, working in conjunction with various
overloads of operator= (and other assigning operators) to allow
selective replacement (slicing) of the controlled sequence. In each case
the selected element(s) must exist.

valarray<T> operator[](slice slicearr) const;

This function returns an object of class valarray<T>
containing those elements of the controlled sequence designated by
slicearr. [Example:

Clause 17.6.3.5 [allocator.requirements] paragraph 4 says that implementations
are permitted to supply containers that are unable to cope with
allocator instances and that container implementations may assume
that all instances of an allocator type compare equal. We gave
implementers this latitude as a temporary hack, and eventually we
want to get rid of it. What happens when we're dealing with
allocators that don't compare equal?

In particular: suppose that v1 and v2 are both
objects of type vector<int, my_alloc> and that
v1.get_allocator() != v2.get_allocator(). What happens if
we write v1.swap(v2)? Informally, three possibilities:

1. This operation is illegal. Perhaps we could say that an
implementation is required to check and to throw an exception, or
perhaps we could say it's undefined behavior.

2. The operation performs a slow swap (i.e., using three
invocations of operator=), leaving each allocator with its
original container. This would be an O(N) operation.

3. The operation swaps both the vectors' contents and their
allocators. This would be an O(1) operation. That is:
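The code elided here can be sketched with a hypothetical toy container (all names invented, not the original illustration): under option 3, swap exchanges the stored allocator along with the buffer, so the operation is O(1) even when the two allocators compare unequal.

```cpp
#include <cstddef>
#include <utility>

// Toy stand-in for a vector-like container. swap() moves the allocator
// together with the memory it allocated, so each buffer remains paired
// with the allocator that can deallocate it.
template <class T, class Alloc>
struct toy_vector {
    Alloc alloc;
    T*    data;
    std::size_t size;

    void swap(toy_vector& other) {
        std::swap(alloc, other.alloc);  // allocator travels with its memory
        std::swap(data,  other.data);
        std::swap(size,  other.size);
    }
};
```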

[
2007-01-12, Howard: This issue will now tend to come up more often with move constructors
and move assignment operators. For containers, these members transfer resources (i.e.
the allocated memory) just like swap.
]

[
Batavia: There is agreement to overload the container swap on the allocator's Swappable
requirement using concepts. If the allocator supports Swappable, then container's swap will
swap allocators, else it will perform a "slow swap" using copy construction and copy assignment.
]

[
2009-04-28 Pablo adds:
]

Fixed in
N2525.
I argued for marking this Tentatively-Ready right after Bellevue,
but there was a concern that
N2525
would break in the presence of the RVO. (That breakage had nothing to do with
swap, but nevertheless.) I addressed that breakage in
N2840
(Summit) by means of a non-normative reference:

[Note: in situations where the copy constructor for a container is elided,
this function is not called. The behavior in these cases is as if
select_on_container_copy_construction returned x — end note]

Notes: The function can make a write position available only if
( mode & ios_base::out) != 0. To make a write position
available, the function reallocates (or initially allocates) an
array object with a sufficient number of elements to hold the
current array object (if any), plus one additional write position.
If ( mode & ios_base::in) != 0, the function alters the read end
pointer egptr() to point just past the new write position (as
does the write end pointer epptr()).

The sentences "plus one additional write position." and especially
"(as does the write end pointer epptr())" COULD by interpreted
(and is interpreted by at least my library vendor) as:

post-condition: epptr() == pptr()+1

This WOULD force sputc() to call the virtual overflow() each time.

The proposed change also affects Defect Report 169.

Proposed resolution:

27.7.1.1/2 Change:

2- Notes: The function allocates no array object.

to:

2- Postcondition: str() == "".

27.7.1.1/3 Change:

-3- Effects: Constructs an object of class basic_stringbuf,
initializing the base class with basic_streambuf()
(lib.streambuf.cons), and initializing mode with which . Then copies
the content of str into the basic_stringbuf underlying character
sequence and initializes the input and output sequences according to
which. If which & ios_base::out is true, initializes the output
sequence with the underlying sequence. If which & ios_base::in is
true, initializes the input sequence with the underlying sequence.

to:

-3- Effects: Constructs an object of class basic_stringbuf,
initializing the base class with basic_streambuf()
(lib.streambuf.cons), and initializing mode with which. Then copies
the content of str into the basic_stringbuf underlying character
sequence. If which & ios_base::out is true, initializes the output
sequence such that pbase() points to the first underlying character,
epptr() points one past the last underlying character, and if (which &
ios_base::ate) is true, pptr() is set equal to
epptr() else pptr() is set equal to pbase(). If which & ios_base::in
is true, initializes the input sequence such that eback() and gptr()
point to the first underlying character and egptr() points one past
the last underlying character.

27.7.1.2/1 Change:

-1- Returns: A basic_string object whose content is equal to the
basic_stringbuf underlying character sequence. If the buffer is only
created in input mode, the underlying character sequence is equal to
the input sequence; otherwise, it is equal to the output sequence. In
case of an empty underlying character sequence, the function returns
basic_string<charT,traits,Allocator>().

to:

-1- Returns: A basic_string object whose content is equal to the
basic_stringbuf underlying character sequence. If the basic_stringbuf
was created only in input mode, the resultant basic_string contains
the character sequence in the range [eback(), egptr()). If the
basic_stringbuf was created with (which & ios_base::out) being true
then the resultant basic_string contains the character sequence in the
range [pbase(), high_mark) where high_mark represents the position one
past the highest initialized character in the buffer. Characters can
be initialized either through writing to the stream, or by
constructing the basic_stringbuf with a basic_string, or by calling
the str(basic_string) member function. In the case of calling the
str(basic_string) member function, all characters initialized prior to
the call are now considered uninitialized (except for those
characters re-initialized by the new basic_string). Otherwise the
basic_stringbuf has been created in neither input nor output mode and
a zero length basic_string is returned.

27.7.1.2/2 Change:

-2- Effects: If the basic_stringbuf's underlying character sequence is
not empty, deallocates it. Then copies the content of s into the
basic_stringbuf underlying character sequence and initializes the
input and output sequences according to the mode stored when creating
the basic_stringbuf object. If (mode&ios_base::out) is true, then
initializes the output sequence with the underlying sequence. If
(mode&ios_base::in) is true, then initializes the input sequence with
the underlying sequence.

to:

-2- Effects: Copies the content of s into the basic_stringbuf
underlying character sequence. If mode & ios_base::out is true,
initializes the output sequence such that pbase() points to the first
underlying character, epptr() points one past the last underlying
character, and if (mode & ios_base::ate) is true,
pptr() is set equal to epptr() else pptr() is set equal to pbase(). If
mode & ios_base::in is true, initializes the input sequence such that
eback() and gptr() point to the first underlying character and egptr()
points one past the last underlying character.

27.7.1.3/1 Change:

1- Returns: If the input sequence has a read position available,
returns traits::to_int_type(*gptr()). Otherwise, returns
traits::eof().

to:

1- Returns: If the input sequence has a read position available,
returns traits::to_int_type(*gptr()). Otherwise, returns
traits::eof(). Any character in the underlying buffer which has been
initialized is considered to be part of the input sequence.

27.7.1.3/9 Change:

-9- Notes: The function can make a write position available only if (
mode & ios_base::out) != 0. To make a write position available, the
function reallocates (or initially allocates) an array object with a
sufficient number of elements to hold the current array object (if
any), plus one additional write position. If ( mode & ios_base::in) !=
0, the function alters the read end pointer egptr() to point just past
the new write position (as does the write end pointer epptr()).

to:

-9- The function can make a write position available only if ( mode &
ios_base::out) != 0. To make a write position available, the function
reallocates (or initially allocates) an array object with a sufficient
number of elements to hold the current array object (if any), plus one
additional write position. If ( mode & ios_base::in) != 0, the
function alters the read end pointer egptr() to point just past the
new write position.

[post-Kona: Howard provided wording. At Kona the LWG agreed that
something along these lines was a good idea, but the original
proposed resolution didn't say enough about the effect of various
member functions on the underlying character sequences.]

Rationale:

The current basic_stringbuf description is over-constrained in such
a way as to prohibit vendors from making this the high-performance
in-memory stream it was meant to be. The fundamental problem is that
the pointers: eback(), gptr(), egptr(), pbase(), pptr(), epptr() are
observable from a derived client, and the current description
restricts the range [pbase(), epptr()) from being grown geometrically.
This change allows, but does not require, geometric growth of this
range.

Backwards compatibility issues: These changes will break code that
derives from basic_stringbuf, observes epptr(), and depends upon
[pbase(), epptr()) growing by one character on each call to overflow()
(i.e. test suites). Otherwise there are no backwards compatibility
issues.

27.7.1.1/2: The non-normative note is non-binding and, if it were
binding, would be an overspecification. The recommended change focuses
on the important observable fact.

27.7.1.1/3: This change does two things: 1. It describes exactly
what must happen in terms of the sequences. The terms "input
sequence" and "output sequence" are not well defined. 2. It
introduces a common extension: open with app or ate mode. I concur
with issue 238 that paragraph 4 is both wrong and unnecessary.

27.7.1.2/1: This change is the crux of the efficiency issue. The
resultant basic_string is not dependent upon epptr(), and thus
implementors are free to grow the underlying buffer geometrically
during overflow() *and* place epptr() at the end of that buffer.

27.7.1.2/2: Made consistent with the proposed 27.7.1.1/3.

27.7.1.3/1: Clarifies that characters written to the stream beyond
the initially specified string are available for reading in an i/o
basic_streambuf.

27.7.1.3/12: Restricting the positioning to [xbeg, xend) is no
longer allowable since [pbase(), epptr()) may now contain
uninitialized characters. Positioning is only allowable over the
initialized range.
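Two consequences of the wording above can be exercised directly (a sketch, assuming C++11 library behavior): str() returns exactly the characters written, the [pbase(), high_mark) range, regardless of how the buffer grew internally; and with ios_base::ate the initial string is retained and further output appends to it.

```cpp
#include <sstream>
#include <string>

// str() reflects what was actually written, independent of epptr().
std::string written() {
    std::stringbuf sb(std::ios_base::out);
    sb.sputn("abc", 3);
    return sb.str();
}

// ate mode: pptr() starts at the end of the initial string, so output appends.
std::string appended() {
    std::ostringstream os("abc", std::ios_base::out | std::ios_base::ate);
    os << "def";
    return os.str();
}
```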

434. bitset::to_string() hard to use

It has been pointed out a number of times that the bitset to_string() member
function template is tedious to use since callers must explicitly specify the
entire template argument list (3 arguments). At least two implementations
provide a number of overloads of this template to make it easier to use.

Proposed resolution:

In order to allow callers to specify no template arguments at all, just the
first one (charT), or the first 2 (charT and traits), in addition to all
three template arguments, add the following three overloads to both the
interface (declarations only) of the class template bitset as well as to
section 23.3.5.2, immediately after p34, the Returns clause of the existing
to_string() member function template:

[Kona: the LWG agrees that this is an improvement over the
status quo. Dietmar thought about an alternative using a proxy
object but now believes that the proposed resolution above is the
right choice.
]

435. bug in DR 25

It has been pointed out that the proposed resolution in DR 25 may not be
quite up to snuff:
http://gcc.gnu.org/ml/libstdc++/2003-09/msg00147.html
http://anubis.dkuug.dk/jtc1/sc22/wg21/docs/lwg-defects.html#25

It looks like Petur is right. The complete corrected text is copied below.
I think we may have been confused by the reference to 22.2.2.2.2 and
the subsequent description of `n' which actually talks about the second
argument to sputn(), not about the number of fill characters to pad with.

So the question is: was the original text correct? If the intent was to
follow classic iostreams then it most likely wasn't, since setting width()
to less than the length of the string doesn't truncate it on output. This
is also the behavior of most implementations (except for SGI's standard
iostreams where the operator does truncate).

Proposed resolution:

Change the text in 21.3.7.9, p4 from

If bool(k) is true, inserts characters as if by calling
os.rdbuf()->sputn(str.data(), n), padding as described in stage 3
of lib.facet.num.put.virtuals, where n is the larger of os.width()
and str.size();

to

If bool(k) is true, determines padding as described in
lib.facet.num.put.virtuals, and then inserts the resulting
sequence of characters seq as if by calling
os.rdbuf()->sputn(seq, n), where n is the larger of
os.width() and str.size();

[Kona: it appears that neither the original wording, DR25, nor the
proposed resolution, is quite what we want. We want to say that
the string will be output, padded to os.width() if necessary. We
don't want to duplicate the padding rules in clause 22, because
they're complicated, but we need to be careful because they weren't
written with quite this case in mind. We need to say what
the character sequence is, and then defer to clause 22. Post-Kona:
Benjamin provided wording.]

436. are cv-qualified facet types valid facets?

Is "const std::ctype<char>" a valid template argument to has_facet, use_facet,
and the locale template ctor? And if so, does it designate the same Facet as
the non-const "std::ctype<char>?" What about "volatile std::ctype<char>?"
Different implementations behave differently: some fail to compile, others
accept such types but behave inconsistently.

Proposed resolution:

Change 22.1.1.1.2, p1 to read:

Template parameters in this clause which are required to be facets
are those named Facet in declarations. A program that passes a type
that is not a facet, or a type that refers to volatile-qualified
facet, as an (explicit or deduced) template parameter to a locale
function expecting a facet, is ill-formed. A const-qualified facet is
a valid template argument to any locale function that expects a Facet
template parameter.

The second line of this snippet is likely an error. Solution A catches
the error and refuses to compile. The reason is that there is no
specialization of the member template constructor that looks like:

So the expression binds to the unspecialized member template
constructor, and then fails (compile time) because char is not an
InputIterator.

Solution B compiles the above example, though. 'a' is cast to an
unsigned integral type and used to size the outer vector. 'b' is
converted to the inner vector by static_cast, using its explicit
constructor:

explicit vector(size_type n);

and so you end up with a static_cast<size_type>('a') by
static_cast<size_type>('b') matrix.

It is certainly possible that this is what the coder intended. But the
explicit qualifier on the inner vector has been thwarted at any rate.

The standard is not clear whether the expression:

vector<vector<pair<char, char> > > d('a', 'b');

(and similar expressions) are:

undefined behavior.

illegal and must be rejected.

legal and must be accepted.

My preference is listed in the order presented.

There are still other techniques for implementing the requirements of
paragraphs 9-11, namely the "restricted template technique" (e.g.
enable_if). This technique is the most compact and easy way of coding
the requirements, and has the behavior of #2 (rejects the above
expression).

In the previous paragraph the alternative binding will fail if f
is not implicitly convertible to X::size_type or if l is not implicitly
convertible to X::value_type.

The extent to which an implementation determines that a type cannot be
an input iterator is unspecified, except that as a minimum integral
types shall not qualify as input iterators.

[
Kona: agreed that the current standard requires v('a', 'b')
to be accepted, and also agreed that this is surprising behavior. The
LWG considered several options, including something like
implicit_cast, which doesn't appear to be quite what we want. We
considered Howard's three options: allow acceptance or rejection,
require rejection as a compile time error, and require acceptance. By
straw poll (1-6-1), we chose to require a compile time error.
Post-Kona: Howard provided wording.
]

[
Sydney: The LWG agreed with this general direction, but there was some
discomfort with the wording in the original proposed resolution.
Howard submitted new wording, and we will review this again in
Redmond.
]

[Redmond: one very small change in wording: the first argument
is cast to size_t. This fixes the problem of something like
vector<vector<int> >(5, 5), where int is not
implicitly convertible to the value type.]

Rationale:

The proposed resolution fixes:

vector<int> v(10, 1);

since as integral types 10 and 1 must be disqualified as input
iterators and therefore the (size,value) constructor is called (as
if).

The proposed resolution breaks:

vector<vector<T> > v(10, 1);

because the integral type 1 is not *implicitly* convertible to
vector<T>. The wording above requires a diagnostic.

The proposed resolution leaves the behavior of the following code
unspecified.

The implementation may or may not detect that A is not an input
iterator and employ the (size,value) constructor. Note though that
in the above example if the B(A) constructor is qualified explicit,
then the implementation must reject the constructor as A is no longer
implicitly convertible to B.

442. sentry::operator bool() inconsistent signature

In section 27.7.3.4 [ostream::sentry] paragraph 4, the description
declares basic_ostream<charT, traits>::sentry::operator bool() as
non-const, but the synopsis in section 27.6.2.3 declares it
const.

The standard places no restrictions at all on the reference type
of input, output, or forward iterators (for forward iterators it
only specifies that *x must be value_type& and doesn't mention
the reference type). Bidirectional iterators' reference type is
restricted only by implication, since the base iterator's
reference type is used as the return type of reverse_iterator's
operator*, which must be T& in order to be a conforming forward
iterator.

Here's what I think we ought to be able to expect from an input
or forward iterator's reference type R, where a is an iterator
and V is its value_type:

*a is convertible to R

R is convertible to V

static_cast<V>(static_cast<R>(*a)) is equivalent to
static_cast<V>(*a)

A mutable forward iterator ought to satisfy, for x of type V:

{ R r = *a; r = x; } is equivalent to *a = x;

I think these requirements capture existing container iterators
(including vector<bool>'s), but render istream_iterator invalid;
its reference type would have to be changed to a constant
reference.

(Jeremy Siek) During the discussion in Sydney, it was felt that a
simpler long term solution for this was needed. The solution proposed
was to require reference to be the same type as *a
and pointer to be the same type as a->. Most
iterators in the Standard Library already meet this requirement. Some
iterators are output iterators, and do not need to meet the
requirement, and others are only specified through the general
iterator requirements (which will change with this resolution). The
sole case where there is an explicit definition of the reference type
that will need to change is istreambuf_iterator which returns
charT from operator* but has a reference type of
charT&. We propose changing the reference type of
istreambuf_iterator to charT.

The other option for resolving the issue with pointer,
mentioned in the note below, is to remove pointer
altogether. I prefer placing requirements on pointer to
removing it for two reasons. First, pointer will become
useful for implementing iterator adaptors and in particular,
reverse_iterator will become better defined. Second,
removing pointer is a rather drastic and publicly-visible
action to take.

The proposed resolution technically enlarges the requirements for
iterators, which means there are existing iterators (such as
istreambuf_iterator, and potentially some programmer-defined
iterators) that will no longer meet the requirements. Will this break
existing code? The scenario in which it would is if an algorithm
implementation (say in the Standard Library) is changed to rely on
iterator_traits::reference, and then is used with one of the
iterators that do not have an appropriately defined
iterator_traits::reference.

The proposed resolution makes one other subtle change. Previously,
it was required that output iterators have a difference_type
and value_type of void, which means that a forward
iterator could not be an output iterator. This is clearly a mistake,
so I've changed the wording to say that those types may be
void.

Proposed resolution:

In 24.4.1 [iterator.traits], after:

be defined as the iterator's difference type, value type and iterator
category, respectively.

[
Redmond: there was concern in Sydney that this might not be the only place
where things were underspecified and needed to be changed. Jeremy
reviewed iterators in the standard and confirmed that nothing else
needed to be changed.
]

Table 76, the random access iterator requirement table, says that the
return type of a[n] must be "convertible to T". When an iterator's
value_type T is an abstract class, nothing is convertible to T.
Surely this isn't an intended restriction?

"The macro offsetof accepts a restricted set of type arguments in this
International Standard. type shall be a POD structure or a POD union
(clause 9). The result of applying the offsetof macro to a field that
is a static data member or a function member is undefined."

Revised text:

"If type is not a POD structure or a POD union the results are undefined."

Looks to me like the revised text should have replaced only the second
sentence. It doesn't make sense standing alone.

Proposed resolution:

Change 18.1, paragraph 5, to:

"The macro offsetof accepts a restricted set of type arguments in this
International Standard. If type is not a POD structure or a POD union
the results are undefined. The result of applying the offsetof macro
to a field that is a static data member or a function member is
undefined."

453. basic_stringbuf::seekoff need not always fail for an empty stream

455. cerr::tie() and wcerr::tie() are overspecified

Both cerr::tie() and wcerr::tie() are obliged to be null at program
startup. This is overspecification and overkill. It is both traditional
and useful to tie cerr to cout, to ensure that standard output is drained
whenever an error message is written. This behavior should at least be
permitted if not required. Same for wcerr::tie().

Proposed resolution:

Add to the description of cerr:

After the object cerr is initialized, cerr.tie() returns &cout.
Its state is otherwise the same as required for basic_ios<char>::init
(lib.basic.ios.cons).

Add to the description of wcerr:

After the object wcerr is initialized, wcerr.tie() returns &wcout.
Its state is otherwise the same as required for basic_ios<wchar_t>::init
(lib.basic.ios.cons).

[Sydney: straw poll (3-1): we should require, not just
permit, cout and cerr to be tied on startup. Pre-Redmond: Bill will
provide wording.]

456. Traditional C header files are overspecified

The C++ Standard effectively requires that the traditional C headers
(of the form <xxx.h>) be defined in terms of the newer C++
headers (of the form <cxxx>). Clauses 17.4.1.2/4 and D.5 combine
to require that:

Including the header <cxxx> declares a C name in namespace std.

Including the header <xxx.h> declares a C name in namespace std
(effectively by including <cxxx>), then imports it into the global
namespace with an individual using declaration.

The rules were left in this form despite repeated and heated objections
from several compiler vendors. The C headers are often beyond the direct
control of C++ implementors. In some organizations, it's all they can do
to get a few #ifdef __cplusplus tests added. Third-party library vendors
can perhaps wrap the C headers. But neither of these approaches supports
the drastic restructuring required by the C++ Standard. As a result, it is
still widespread practice to ignore this conformance requirement, nearly
seven years after the committee last debated this topic. Instead, what is
often implemented is:

Including the header <xxx.h> declares a C name in the
global namespace.

Including the header <cxxx> declares a C name in the
global namespace (effectively by including <xxx.h>), then
imports it into namespace std with an individual using declaration.

The practical benefit for implementors with the second approach is that
they can use existing C library headers, as they are pretty much obliged
to do. The practical cost for programmers facing a mix of implementations
is that they have to assume weaker rules:

If you want to assuredly declare a C name in the global
namespace, include <xxx.h>. You may or may not also get the
declaration in namespace std.

If you want to assuredly declare a C name in namespace std,
include <cxxx>. You may or may not also get the declaration in
the global namespace.

There also exists the possibility of subtle differences due to
Koenig lookup, but there are so few non-builtin types defined in the C
headers that I've yet to see an example of any real problems in this
area.

It is worth observing that the rate at which programmers fall afoul of
these differences has remained small, at least as measured by newsgroup
postings and our own bug reports. (By an overwhelming margin, the
commonest problem is still that programmers include <string> and can't
understand why the typename string isn't defined -- this a decade after
the committee invented namespace std, nominally for the benefit of all
programmers.)

We should accept the fact that we made a serious mistake and rectify it,
however belatedly, by explicitly allowing either of the two schemes for
declaring C names in headers.

[Sydney: This issue has been debated many times, and will
certainly have to be discussed in full committee before any action
can be taken. However, the preliminary sentiment of the LWG was in
favor of the change. (6 yes, 0 no, 2 abstain) Robert Klarer
suggests that we might also want to undeprecate the
C-style .h headers.]

Proposed resolution:

Add to 17.6.1.2 [headers], para. 4:

Except as noted in clauses 18 through 27 and Annex D, the contents of each
header cname shall be the same as that of the corresponding header
name.h, as specified in ISO/IEC 9899:1990 Programming Languages C (Clause
7), or ISO/IEC:1990 Programming Languages-C AMENDMENT 1: C Integrity, (Clause
7), as appropriate, as if by inclusion. In the C++ Standard Library, however,
the declarations and definitions (except for names which are defined
as macros in C) are within namespace scope (3.3.5) of the namespace std.
It is unspecified whether these names are first declared within the global
namespace scope and are then injected into namespace std by explicit
using-declarations (7.3.3 [namespace.udecl]).

Change D.7 [depr.c.headers], para. 2-3:

-2- Every C header, each of which has a name of the form name.h, behaves
as if each name placed in the Standard library namespace by the
corresponding cname header is also placed within the global namespace
scope. It is unspecified whether these names are first declared or
defined within namespace scope (3.3.6 [basic.scope.namespace]) of the
namespace std and are then injected into the global namespace scope by
explicit using-declarations (7.3.3 [namespace.udecl]).

-3- [Example: The header <cstdlib> assuredly provides its declarations
and definitions within the namespace std. It may also provide these
names within the global namespace. The header <stdlib.h> assuredly
provides the same declarations and definitions within the global
namespace, much as in the C Standard. It may also provide these names
within the namespace std. -- end example]

457. bitset constructor: incorrect number of initialized bits

The constructor from unsigned long says it initializes "the first M
bit positions to the corresponding bit values in val. M is the smaller
of N and the value CHAR_BIT * sizeof(unsigned long)."

Object-representation vs. value-representation strikes again. CHAR_BIT *
sizeof (unsigned long) does not give us the number of bits an unsigned long
uses to hold the value. Thus, the first M bit position above is not
guaranteed to have any corresponding bit values in val.

Proposed resolution:

In 20.5.1 [bitset.cons] paragraph 2, change "M is the smaller of
N and the value CHAR_BIT * sizeof (unsigned long). (249)" to
"M is the smaller of N and the number of bits in
the value representation (section 3.9 [basic.types]) of unsigned
long."

460. Default modes missing from basic_fstream member specifications

The second parameters of the non-default constructor and of the open
member function for basic_fstream, named "mode", are optional
according to the class declaration in 27.8.1.11 [lib.fstream]. The
specifications of these members in 27.8.1.12 [lib.fstream.cons] and
27.8.1.13 [lib.fstream.members] disagree with this, though the
constructor declaration has the "explicit" function-specifier implying
that it is intended to be callable with one argument.