The purpose of this document is to record the status of issues
which have come before the Library Working Group (LWG) of the INCITS PL22.16
and ISO WG21 C++ Standards Committee. Issues represent
potential defects in the ISO/IEC IS 14882:2014(E) document.

The issues in these lists are not necessarily formal ISO Defect
Reports (DR's). While some issues will eventually be elevated to
official Defect Report status, other issues will be disposed of in
other ways. See Issue Status.

Prior to Revision 14, library issues lists existed in two slightly
different versions; a Committee Version and a Public
Version. Beginning with Revision 14 the two versions were combined
into a single version.

This document includes [bracketed italicized notes] as a
reminder to the LWG of current progress on issues. Such notes are
strictly unofficial and should be read with caution as they may be
incomplete or incorrect. Be aware that LWG support for a particular
resolution can quickly change if new viewpoints or killer examples are
presented in subsequent discussions.

For the most current official version of this document see
http://www.open-std.org/jtc1/sc22/wg21/.
Requests for further information about this document should include
the document number above, reference ISO/IEC 14882:2014(E), and be
submitted to Information Technology Industry Council (ITI), 1250 Eye
Street NW, Washington, DC 20005.

Public information as to how to obtain a copy of the C++ Standard,
join the standards committee, submit an issue, or comment on an issue
can be found in the comp.std.c++ FAQ.

How to submit an issue

Mail your issue to the author of this list.

Specify a short descriptive title. If you fail to do so, the subject line of your
mail will be used as the issue title.

If the "From" on your email is not the name you wish to appear as issue submitter,
then specify issue submitter.

Provide a brief discussion of the problem you wish to correct. Refer to the latest
working draft or standard using [section.tag] and paragraph numbers where appropriate.

Provide proposed wording. This should indicate exactly how you want the standard
to be changed. General solution statements belong in the discussion area. This
area contains very clear and specific directions on how to modify the current
draft. If you are not sure how to word a solution, you may omit this part.
But your chances of a successful issue greatly increase if you attempt wording.

It is not necessary for you to use html markup. However, if you want to, you can
<ins>insert text like this</ins> and <del>delete text like
this</del>. The only strict requirement is to communicate clearly to
the list maintainer exactly how you want your issue to look.

It is not necessary for you to specify other html font/formatting
mark-up, but if you do the list maintainer will attempt to respect your
formatting wishes (as described by html markup, or other common idioms).

It is not necessary for you to specify open date or last modified date (the date
of your mail will be used).

It is not necessary for you to cross reference other issues, but you can if you
like. You do not need to form the hyperlinks when you do, the list maintainer will
take care of that.

One issue per email is best.

Between the time you submit the issue, and the next mailing deadline
(date at the top of the Revision History), you own this issue.
You control the content, the stuff that is right, the stuff that is
wrong, the format, the misspellings, etc. You can even make the issue
disappear if you want. Just let the list maintainer know how you want
it to look, and he will try his best to accommodate you. After the
issue appears in an official mailing, you no longer enjoy exclusive
ownership of it.

Issue Status

Issues reported to the LWG transition through a variety of statuses,
indicating their progress towards a resolution. Typically, most issues
will flow through the following stages.

New - The issue has not yet been
reviewed by the LWG. Any Proposed Resolution is purely a
suggestion from the issue submitter, and should not be construed as
the view of LWG.

Open - The LWG has discussed the issue
but is not yet ready to move the issue forward. There are several
possible reasons for open status:

Consensus may not yet have been reached as to how to deal
with the issue.

Informal consensus may have been reached, but the LWG awaits
exact Proposed Resolution wording for review.

The LWG wishes to consult additional technical experts before
proceeding.

The issue may require further study.

A Proposed Resolution for an open issue should still not be
construed as the view of the LWG. Comments on the current state of
discussions are often given at the end of open issues in an italic
font. Such comments are for information only and should not be given
undue importance.

Review - Exact wording of a
Proposed Resolution is now available for review on an issue
for which the LWG previously reached informal consensus.

Ready - The LWG has reached consensus
that the issue is a defect in the Standard, the Proposed
Resolution is correct, and the issue is ready to forward to the
full committee for further action as a Defect Report (DR).

Typically, an issue must have a proposed resolution in the currently
published issues list, whose wording does not change during LWG review, to
move to the Ready status.

Voting - This status should not be seen
in a published issues list, but is a marker for use during meetings to
indicate an issue was Ready in the pre-meeting mailing, the Proposed
Resolution is correct, and the issue will be offered to the working
group at the end of the current meeting to apply to the current working
paper (WP) or to close in some other appropriate manner. This easily
distinguishes such issues from those moving to Ready status during the
meeting itself, that should not be forwarded until the next meeting. If
the issue does not move forward, it should fall back to one of the other
open states before the next list is published.

Immediate - This status should not be
seen in a published issues list, but is a marker for use during meetings
to indicate an issue was not Ready in the pre-meeting mailing, but the
Proposed Resolution is correct, and the issue will be offered to
the working group at the end of the current meeting to apply to the
current working paper (WP) or to close in some other appropriate manner.
This status is used only rarely, typically for fixes that are both small
and obvious, and usually within a meeting of the expected publication of
a revised standard. If the issue does not move forward, it should fall
back to one of the other open states before the next list is published.

In addition, there are a few ways to categorise an issue that remains
open to a resolution within the library but is not actively being worked
on.

Deferred - The LWG has discussed the issue,
is not yet ready to move the issue forward, but neither does it deem the
issue significant enough to delay publishing a standard or Technical Report.
A typical deferred issue would be seeking to clarify wording that might be
technically correct, but easily mis-read.

A Proposed Resolution for a deferred issue should still not be
construed as the view of the LWG. Comments on the current state of
discussions are often given at the end of open issues in an italic
font. Such comments are for information only and should not be given
undue importance.

Core - The LWG has discussed the issue, and feels
that some key part of resolving the issue is better handled by a cleanup of
the language in the Core part of the standard. The issue is passed to the Core
Working Group, which should ideally open a corresponding issue that can be
linked from the library issue. Such issues will be revisited after Core has
made (or declined to make) any changes.

EWG - The LWG has discussed the issue, and believes
that some key part of resolving the issue is better handled by some (hopefully
small) extension to the language. The issue is passed to the Evolution Working
Group, which should ideally open a corresponding issue that can be linked from
the library issue. Such issues will be revisited after Evolution has made (or
declined to make) any recommendations. Positive recommendations from EWG will
often mean the issue transitions to Core status while we wait for some
proposed new feature to land in the working paper.

LEWG - The LWG has discussed the issue, and deemed
that the issue is either an extension, however small, or changes the library design
in some fundamental way, and so has delegated the initial work to the Library
Evolution Working Group.

Ultimately, all issues should reach closure with one of the following statuses.

DR - (Defect Report) - The full WG21/PL22.16
committee has voted to forward the issue to the Project Editor to be
processed as a Potential Defect Report. The Project Editor reviews
the issue, and then forwards it to the WG21 Convenor, who returns it
to the full committee for final disposition. This issues list
accords the status of DR to all these Defect Reports regardless of
where they are in that process.

WP - (Working Paper) - The proposed resolution has not been
accepted as a Technical Corrigendum, but the full WG21/PL22.16 committee has voted to
apply the Defect Report's Proposed Resolution to the working paper.

C++17 - (C++ Standard, as revised for 2017) - The full
WG21/PL22.16 committee has voted to accept the Defect Report's Proposed Resolution into
the published 2017 revision to the C++ standard, ISO/IEC IS 14882:2017(E).

C++14 - (C++ Standard, as revised for 2014) - The full
WG21/PL22.16 committee has voted to accept the Defect Report's Proposed Resolution into
the published 2014 revision to the C++ standard, ISO/IEC IS 14882:2014(E).

C++11 - (C++ Standard, as revised for 2011) - The full
WG21/PL22.16 committee has voted to accept the Defect Report's Proposed Resolution into
the published 2011 revision to the C++ standard, ISO/IEC IS 14882:2011(E).

TC1 - (Technical Corrigendum 1) - The full
WG21/PL22.16 committee has voted to accept the Defect Report's Proposed
Resolution as a Technical Corrigendum. Action on this issue is thus
complete and no further action is possible under ISO rules.

TRDec - (Decimal TR defect) - The LWG has voted to
accept the Defect Report's Proposed Resolution into the Decimal TR. Action on this
issue is thus complete and no further action is expected.

TS - (TS - various) - The full
WG21/PL22.16 committee has voted to accept the Defect Report's Proposed
Resolution into a published Technical Specification.

Resolved - The LWG has reached consensus
that the issue is a defect in the Standard, but the resolution adopted to
resolve the issue came via some other mechanism than this issue in the
list - typically by applying a formal paper, occasionally as a side effect
of consolidating several interacting issue resolutions into a single issue.

Dup - The LWG has reached consensus that
the issue is a duplicate of another issue, and will not be further
dealt with. A Rationale identifies the duplicated issue's
issue number.

NAD - The LWG has reached consensus that
the issue is not a defect in the Standard.

NAD Editorial - The LWG has reached consensus that
the issue can either be handled editorially, or is handled by a paper (usually
linked to in the rationale).

Tentatively - This is a status qualifier. The issue has
been reviewed online, or at an unofficial meeting, but not in an official meeting, and
some support has been formed for the qualified status. Tentatively qualified issues may
be moved to the unqualified status and forwarded to full committee (if Ready) within the
same meeting. Unlike Ready issues, Tentatively Ready issues will be reviewed in
subcommittee prior to forwarding to full committee. When a status is qualified with
Tentatively, the issue is still considered active.

Pending - This is a status qualifier. When prepended to a status this
indicates the issue has been processed by the committee, and a decision has been made to
move the issue to the associated unqualified status. However for logistical reasons the
indicated outcome of the issue has not yet appeared in the latest working paper.

The following statuses have been retired, but may show up on older issues lists.

NAD Future - In addition to the regular status, the
LWG believes that this issue should be revisited at the next revision of the standard.
That is now an ongoing task managed by the Library Evolution Working Group, and most
issues in this status were reopened with the status LEWG.

NAD Concepts - This status reflects an evolution
of the language during the development of C++11, where a new feature entered the
language, called concepts, that fundamentally changed the way templates would
be specified and written. While this language feature was removed towards the end of
the C++11 project, there is a clear intent to revisit this part of the language design.
During that development, a number of issues were opened against the updated library
related to use of that feature, or requesting fixes that would require explicit use of
the concepts feature. All such issues have been closed with this status, and may be
revisited should this or a similar language feature return for a future standard.

NAD Arrays - This status reflects an evolution
of the language during the development of C++14/17, where work on a Technical
Specification, called the Arrays TS was begun. In early 2016, this work was
abandoned, and the work item was officially withdrawn. During development of the TS,
a number of issues were opened against the features in the TS. All such issues have
been closed with this status, and may be revisited should this or a similar language
feature return for a future standard.

Issues are always given the status of New when
they first appear on the issues list. They may progress to
Open or Review
while the LWG is actively working on them. When the LWG has reached consensus on
the disposition of an issue, the status will then change to
Dup, NAD, or
Ready as appropriate. Once the full PL22.16 committee
votes to forward Ready issues to the Project Editor, they are given the status of Defect
Report (DR). These in turn may become the basis for
Technical Corrigenda (TC1), an updated standard
(C++11, C++14),
or are closed without action other than a Record of Response
(Resolved) where the desired effect has already
been achieved by some other process. The intent of this LWG process is that only issues
which are truly defects in the Standard move to the formal ISO DR status.

A third party test suite tries to exercise istream::ignore(N) with
a negative value of N and expects that the implementation will treat
N as if it were 0. Our implementation asserts that (N >= 0) holds and
aborts the test.

I can't find anything in section 27 that prohibits such values but I don't
see what the effects of such calls should be, either (this applies to
a number of unformatted input functions as well as some member functions
of the basic_streambuf template).

I propose that we add to each function in clause 27 that takes an argument,
say N, of type streamsize a Requires clause saying that "N >= 0." The intent
is to allow negative streamsize values in calls to precision() and width()
but disallow it in calls to streambuf::sgetn(), istream::ignore(), or
ostream::write().

[Kona: The LWG agreed that this is probably what we want. However, we
need a review to find all places where functions in clause 27 take
arguments of type streamsize that shouldn't be allowed to go
negative. Martin will do that review.]

I note that given an input iterator a for type T,
then *a only has to be "convertible to T",
not actually of type T.

Firstly, I can't seem to find an exact definition of "convertible to T".
While I assume it is the obvious definition (an implicit conversion), I
can't find an exact definition. Is there one?

Slightly more worryingly, there doesn't seem to be any restriction on
this type, other than that it is "convertible to T". Consider two input
iterators a and b. I would personally assume that most people would
expect *a==*b to perform T(*a)==T(*b); however, it doesn't seem that
the standard requires that, and whatever type *a is (call it U)
could have == defined on it with totally different semantics and still
be a valid input iterator.

Is this a correct reading? When using input iterators should I write
T(*a) all over the place to be sure that the object I'm using is the
class I expect?

This is especially a nuisance for operations that are defined to be
"convertible to bool". (This is probably allowed so that
implementations could return, say, an int and avoid an unnecessary
conversion. However, all implementations I have seen simply return a
bool anyway.) Typical implementations of STL algorithms just write
things like while(a!=b && *a!=0). But strictly
speaking, there are lots of types that are convertible to T but
that also overload the appropriate operators so this doesn't behave
as expected.

If we want to make code like this legal (which most people seem to
expect), then we'll need to tighten up what we mean by "convertible
to T".

[Lillehammer: The first part is NAD, since "convertible" is
well-defined in core. The second part is basically about pathological
overloads. It's a minor problem but a real one. So leave open for
now, hope we solve it as part of iterator redesign.]

[
2009-07-28 Reopened by Alisdair. No longer solved by concepts.
]

[
2009-10 Santa Cruz:
]

Mark as NAD Future. We agree there's an issue, but there is no
proposed solution at this time and this will be solved by concepts in
the future.

[2017-02 in Kona, LEWG recommends NAD]

Has been clarified by 14. By design. Ranges might make it go away.
Current wording for input iterators is more constrained.

A problem with TR1 regex is currently being discussed on the Boost
developers list. It involves the handling of case-insensitive matching
of character ranges such as [Z-a]. The proper behavior (according to the
ECMAScript standard) is unimplementable given the current specification
of the TR1 regex_traits<> class template. John Maddock, the author of
the TR1 regex proposal, agrees there is a problem. The full discussion
can be found at http://lists.boost.org/boost/2005/06/28850.php (first
message copied below). We don't have any recommendations as yet.

-- Begin original message --

The situation of interest is described in the ECMAScript specification
(ECMA-262), section 15.10.2.15:

"Even if the pattern ignores case, the case of the two ends of a range
is significant in determining which characters belong to the range.
Thus, for example, the pattern /[E-F]/i matches only the letters E, F,
e, and f, while the pattern /[E-f]/i matches all upper and lower-case
ASCII letters as well as the symbols [, \, ], ^, _, and `."

A more interesting case is what should happen when doing a
case-insensitive match on a range such as [Z-a]. It should match z, Z,
a, A and the symbols [, \, ], ^, _, and `. This is not what happens with
Boost.Regex (it throws an exception from the regex constructor).

The tough pill to swallow is that, given the specification in TR1, I
don't think there is any effective way to handle this situation.
According to the spec, case-insensitivity is handled with
regex_traits<>::translate_nocase(CharT) -- two characters are equivalent
if they compare equal after both are sent through the translate_nocase
function. But I don't see any way of using this translation function to
make character ranges case-insensitive. Consider the difficulty of
detecting whether "z" is in the range [Z-a]. Applying the transformation
to "z" has no effect (it is essentially std::tolower). And we're not
allowed to apply the transformation to the ends of the range, because as
ECMA-262 says, "the case of the two ends of a range is significant."

So AFAICT, TR1 regex is just broken, as is Boost.Regex. One possible fix
is to redefine translate_nocase to return a string_type containing all
the characters that should compare equal to the specified character. But
this function is hard to implement for Unicode, and it doesn't play nice
with the existing ctype facet. What a mess!

-- End original message --

[
John Maddock adds:
]

One small correction, I have since found that ICU's regex package does
implement this correctly, using a similar mechanism to the current
TR1.Regex.

Given an expression [c1-c2] that is compiled as case insensitive it:

Enumerates every character in the range c1 to c2 and converts it to its
case-folded equivalent. That case-folded character is then used as a key to a
table of equivalence classes, and each member of the class is added to the
list of possible matches supported by the character class. This second step
isn't possible with our current traits class design, but isn't necessary if
the input text is also converted to a case-folded equivalent on the fly.

ICU applies similar brute force mechanisms to character classes such as
[[:lower:]] and [[:word:]], however these are at least cached, so the impact
is less noticeable in this case.

Quick and dirty performance comparisons show that expressions such as
"[X-\\x{fff0}]+" are indeed very slow to compile with ICU (about 200 times
slower than a "normal" expression). For an application that uses a lot of
regexes this could have a noticeable performance impact. ICU also has an
advantage in that it knows the range of valid character codes: code points
outside that range are assumed not to require enumeration, as they cannot
be part of any equivalence class. I presume that if we want the TR1.Regex
to work with arbitrarily large character sets enumeration really does become
impractical.

Finally note that Unicode has:

Three cases (upper, lower, and title).
One-to-many, and many-to-one case transformations.
Characters that have context-sensitive case translations - for example, an
uppercase sigma has two different lowercase forms; the form chosen depends
on context (is it at the end of a word or not). A caseless match for an upper-case
sigma should match either of the lower-case forms, which is why case folding
is often approximated by tolower(toupper(c)).

Probably we need some way to enumerate character equivalence classes,
including digraphs (either as a result or an input), and some way to tell
whether the next character pair is a valid digraph in the current locale.

Where possible, the tuple comparison operators <, <=, >=, and > ought to be
defined in terms of std::less rather than operator<, in order to
support comparison of tuples of pointers.

[
2009-07-28 Reopened by Alisdair. No longer solved by concepts.
]

[
2009-10 Santa Cruz:
]

If we solve this for tuple we would have to solve it for pair
algorithms, etc. It is too late to do that at this time. Move to NAD Future.

Proposed resolution:

change 6.1.3.5/5 from:

Returns: The result of a lexicographical comparison between t and
u. The result is defined as: (bool)(get<0>(t) < get<0>(u)) ||
(!(bool)(get<0>(u) < get<0>(t)) && ttail < utail), where rtail for
some tuple r is a tuple containing all but the first element of
r. For any two zero-length tuples e and f, e < f returns false.

to:

Returns: The result of a lexicographical comparison between t and
u. For any two zero-length tuples e and f, e < f returns false.
Otherwise, the result is defined as: cmp( get<0>(t), get<0>(u)) ||
(!cmp(get<0>(u), get<0>(t)) && ttail < utail), where rtail for some
tuple r is a tuple containing all but the first element of r, and
cmp(x,y) is an unspecified function template defined as follows.

Where T is the type of x and U is the type of y:

if T and U are pointer types and T is convertible to U, returns
less<U>()(x,y)

otherwise, if T and U are pointer types, returns less<T>()(x,y)

otherwise, returns (bool)(x < y)

[
Berlin: This issue is much bigger than just tuple (pair, containers,
algorithms). Dietmar will survey and work up proposed wording.
]

POSIX introduces extensions to the C locale mechanism that
allow multiple concurrent locales to be used in the same application
by introducing a type locale_t that is very similar to
std::locale, and a number of _l functions that make use of it.

The global locale (set by setlocale) is now specified to be per-
process. If a thread does not call uselocale, the global locale is
in effect for that thread. A thread can install a per-thread locale by
using uselocale.

There is also a nice querylocale mechanism by which one can obtain
the name (such as "de_DE") for a specific facet, even for combined
locales, with no std::locale equivalent.

std::locale should be harmonized with the new POSIX locale_t
mechanism and provide equivalents for uselocale and querylocale.

[
Kona (2007): Bill and Nick to provide wording.
]

[
San Francisco: Bill and Nick still intend to provide wording, but this
is a part of the task to be addressed by the group that will look into
issue 860.
]

[
2009-07 Frankfurt:
]

It's our intention to stay in sync with WG14. If WG14 makes a decision
that requires a change in WG21 the issue will be reopened.

Each of the three clocks specified in Clocks 23.17.7 [time.clock]
provides the member function:

static time_point now();

The semantics specified by Clock requirements 23.17.3 [time.clock.req]
make no mention of error handling. Thus the function may throw bad_alloc
or an implementation-defined exception (20.5.5.12 [res.on.exception.handling]
paragraph 4).

Some implementations of these functions on POSIX, Windows, and
presumably on other operating systems, may fail in ways only detectable
at runtime. Some failures on Windows are due to supporting chipset
errata and can even occur after successful calls to a clock's now()
function.

These functions are used in cases where exceptions are not appropriate
or where the specifics of the exception or cause of error need to be
available to the user. See
N2828,
Library Support for hybrid error
handling (Rev 1), for more specific discussion of use cases. Thus some change in
the interface of now is required.

The proposed resolution has been implemented in the Boost version of the
chrono library. No problems were encountered.

[
Batavia (2009-05):
]

We recommend this issue be deferred until the next Committee Draft
has been issued and the prerequisite paper has been accepted.

Move to Open.

[
2009-10 Santa Cruz:
]

Mark as NAD future. Too late to make this change without having already
accepted the hybrid error handling proposal.

-2- In Table 55 C1 and C2 denote clock types. t1 and
t2 are values returned by C1::now() where the call
returning t1 happens before (1.10) the call returning t2 and
both of these calls happen before C1::time_point::max().
ec denotes an object of type error_code
(22.5.3.1 [syserr.errcode.overview]).

33.4.3 [thread.mutex.requirements] describes the requirements for a type to be
a "Mutex type". A Mutex type can be used as the template argument for
the Lock type that's passed to condition_variable_any::wait (although
Lock seems like the wrong name here, since Lock is given a different
formal meaning in 33.4.4 [thread.lock]) and, although the WD doesn't quite say
so, as the template argument for lock_guard and unique_lock.

The requirements for a Mutex type include:

m.lock() shall be well-formed and have [described] semantics, including a return type of void.

m.try_lock() shall be well-formed and have [described] semantics, including a return type of bool.

m.unlock() shall be well-formed and have [described] semantics, including a return type of void.

Also, a Mutex type "shall not be copyable nor movable".

The latter requirement seems completely irrelevant, and the three
requirements on return types are tighter than they need to be. For
example, there's no reason that lock_guard can't be instantiated with a
type that's copyable. The rule is, in fact, that lock_guard, etc. won't
try to copy objects of that type. That's a constraint on locks, not on
mutexes. Similarly, the requirements for void return types are
unnecessary; the rule is, in fact, that lock_guard, etc. won't use any
returned value. And rather than requiring a return type of exactly bool,
the requirement should be that the return type is convertible to bool.

[
Summit:
]

Move to open. Related to conceptualization and should probably be tackled as part of that.

The intention is not only to place a constraint on what types such as
lock_guard may do with mutex types, but on what any code, including user
code, may do with mutex types. Thus the constraints as they are apply to
the mutex types themselves, not the current users of mutex types in the
standard.

This is a low priority issue; the wording as it is may be overly
restrictive but this may not be a real issue.

[
Post Summit Anthony adds:
]

Section 33.4.3 [thread.mutex.requirements] conflates the
requirements on a generic Mutex type (including user-supplied mutexes)
with the requirements placed on the standard-supplied mutex types in an
attempt to group everything together and save space.

When applying concepts to chapter 30, I suggest that the concepts
Lockable and TimedLockable embody the requirements for
*use* of a mutex type as required by
unique_lock/lock_guard/condition_variable_any. These should be
relaxed as Pete describes in the issue. The existing words in 33.4.3 [thread.mutex.requirements] are requirements on all of
std::mutex, std::timed_mutex,
std::recursive_mutex and std::recursive_timed_mutex,
and should be rephrased as such.

33.4.3 [thread.mutex.requirements] describes required member
functions of mutex types, and requires that they throw exceptions under
certain circumstances. This is overspecified. User-defined types can
abort on such errors without affecting the operation of templates
supplied by standard-library.

[
Summit:
]

Move to open. Related to conceptualization and should probably be
tackled as part of that.

[
2009-10 Santa Cruz:
]

Would be OK to leave it as is for time constraints, could loosen later.

Mark as NAD Future.

[2017-03-01, Kona]

SG1: Agreement that we need a paper.

Proposed resolution:

1025(i). The library should provide more specializations for std::hash

The current specification for return value for reverse_iterator::operator->
will always be a true pointer type, but reverse_iterator supports proxy
iterators where the pointer type may be some kind of 'smart pointer'.

[
Summit:
]

move_iterator avoids this problem by returning a value of the wrapped
Iterator type. A study group was formed to come up with a suggested
resolution.

The move_iterator solution is shown in the proposed wording.

[
2009-07 post-Frankfurt:
]

Howard to deconceptize. Move to Review after that happens.

[
2009-08-01 Howard deconceptized:
]

[
2009-10 Santa Cruz:
]

We can't think of any reason we can't just define reverse
iterator's pointer types to be the same as the underlying iterator's
pointer type, and get it by calling the right arrow directly.

There is a minor problem with the exposition-only declaration of the private
member deref_tmp which is modified in a const member function (and the
same problem occurs in the specification of operator*). The fix is to
make it a mutable member.

The more severe problem is that the resolution for some reason
does not explain in the rationale why it was decided to differ from
the suggested fix (using deref_tmp instead of tmp) in the
[2009-10 Santa Cruz] comment:

this->deref_tmp = current;
--this->deref_tmp;
return this->deref_tmp;

combined with the change of

typedef typename iterator_traits<Iterator>::pointer pointer;

to

typedef Iterator pointer;

The problem with the agreed-on wording is that the following rather
typical example, which compiled with the wording before 1052 had
been applied, won't compile anymore:

Thus the change will break valid, existing code based
on std::reverse_iterator.

IMO the suggestion proposed in the comment is a necessary fix, which harmonizes
with the similar specification of std::move_iterator and properly
reflects the recursive nature of the evaluation of operator->
overloads.

Suggested resolution:

In the class template reverse_iterator synopsis of 27.5.1.1 [reverse.iterator]
change as indicated:

We prefer to use a local variable instead of deref_tmp within
operator->(). And although this means that the mutable
change is no longer needed, we prefer to keep it because it is needed for
operator*() anyway.

Both add and multiply could sensibly be called with more than two arguments.
The variadic template facility makes such declarations simple, and is likely
to be frequently wrapped by end users if we do not supply the variant
ourselves.

We deliberately ignore divide at this point as it is not transitive.
Likewise, subtract places special meaning on the first argument so I do not
suggest extending that immediately. Both could be supported with analogous
wording to that for add/multiply below.

Note that the proposed resolution is potentially incompatible with that
proposed for 921, although the addition of the typedef to ratio would be
equally useful.

[
2009-10-30 Alisdair adds:
]

The consensus of the group when we reviewed this in Santa Cruz was that
921 would proceed to Ready as planned, and the
multi-parameter add/multiply templates should be renamed as
ratio_sum and ratio_product to avoid the problem of
mixing template aliases with partial specializations.

It was also suggested to close this issue as NAD Future as it does not
correspond directly to any NB comment. NBs are free to submit a
specific comment (and re-open) in CD2 though.

Walter Brown also had concerns about better directing the order of
evaluation to avoid overflows if we do proceed for 0x rather than TR1,
so the wording may not be complete yet.

[
Alisdair updates wording.
]

[
2009-10-30 Howard:
]

Moved to Tentatively NAD Future after 5 positive votes on c++std-lib.

Rationale:

Does not have sufficient support at this time. May wish to reconsider for a
future standard.

SG6 suggests this issue is a new feature, not a problem with the existing
standard, and should therefore be closed NAD. However, SG6 invites papers that bring
the proposal up to date with the current standard.

When I look at the unordered_* constructors, I think the complexity is poorly
described and does not follow the style of the rest of the standard.

The complexity for the default constructor is specified as constant.
Actually, it is proportional to n, but there are no invocations of
value_type constructors or other value_type operations.

For the iterator-based constructor the complexity should be:

Complexity: exactly n calls to construct value_type
from InputIterator::value_type (where n = distance(f,l)).
The number of calls to key_equal::operator() is proportional to
n in the average case and n*n in the worst case.

[
2010 Rapperswil:
]

Concern that the current wording may require O(1) where that cannot be delivered. We need to look at
both the clause 23 requirements tables and the constructor description of each unordered container to be sure.

Howard suggests NAD Editorial as we updated the container requirement tables since this issue was written.

Daniel offers to look deeper, and hopefully produce wording addressing any outstanding concerns at the next meeting.

Move to Open.

[2011-02-26: Daniel provides wording]

I strongly suggest cleaning up the differences between the requirement tables and the individual
specifications. In the usual way, the most specific specification wins, which is in this
case the wrong one. In regard to the concern expressed about missing DefaultConstructible
requirements of the value type, I disagree: the function argument n is not a size-control
parameter, but only an effective capacity parameter: no elements will be value-initialized
by these constructors. The necessary requirement for the value type, EmplaceConstructible
into *this, is already listed in Table 103 — Unordered associative container requirements.
Another part of the proposed resolution addresses the inconsistency in
complexity counting between constructions where both a range and a bucket count are involved
and constructions where only bucket counts are provided: e.g. the construction X a(n);
has a complexity of n bucket allocations, but this part of the work is omitted for
X a(i, j, n);, even though it is considerably larger (in the average case) for
n ≫ distance(i, j).

[2011-03-24 Madrid meeting]

Move to deferred

[
2011 Bloomington
]

The proposed wording looks good. Move to Review.

[2012, Kona]

Fix up some presentation issues with the wording, combining the big-O expressions into single
expressions rather than the sum of two separate big-Os.

Strike "constant or linear", prefer "linear in the number of buckets".
This allows for number of buckets being larger than requested n as well.

Default n to "unspecified" rather than "implementation-defined". It seems an unnecessary
burden to ask vendors to document a quantity that is easily determined through the public API of
these classes.

Replace distance(f,l) with "number of elements in the range [f,l)"

Retain in Review with the updated wording

[2012, Portland: Move to Open]

The wording still does not call out Pablo's original concern, that the element constructor is called
no more than N times, and that the N squared term applies to moves during rehash.

Inconsistent use of O(n)+O(N) vs. O(n+N), with a preference for the former.

AJM to update wording with a reference to "no more than N element constructor calls".

Matt concerned that calling out the O(n) requirements is noise, and dangerous noise in suggesting a precision
we do not mean. The cost of constructing a bucket is very different to constructing an element of user-supplied
type.

AJM notes that if there are multiple rehashes, the 'n' complexity is probably not linear.

Matt suggests back to Open, Pablo suggests potentially NAD if we keep revisiting without achieving a resolution.

Matt suggests complexity we are concerned with is the number of operations, such as constructing elements, moving
nodes, and comparing/hashing keys. We are less concerned with constructing buckets, which are generally noise in
this bigger picture.

[2015-01-29 Telecon]

AM: essentially correct, but do we want to complicate the spec?

HH: Pablo has given us permission to NAD it

JM: when I look at the first change in the P/R I find it mildly disturbing that the existing wording says you have a
constant time constructor with a single element even if your n is 10^6, so I think adding this change makes people
aware there might be a large cost in initializing the hash table, even though it doesn't show up in user-visible constructions.

HH: one way to avoid that problem is make the default ctor noexcept. Then the container isn't allowed to create
an arbitrarily large hash table

AM: but this is the constructor where the user provides n

MC: happy with the changes, except I agree with the editorial recommendation to keep the two 𝒪s separate.

JW: yes, the constant 'k' is different in 𝒪(n) and 𝒪(N)

GR: do we want to talk about buckets at all

JM: yes, good to highlight that bucket construction might be a significant cost

HH: suggest we take the suggestion to split 𝒪(n+N) to 𝒪(n)+𝒪(N) and move to Tentatively Ready

GR: 23.2.1p2 says all complexity requirements are stated solely in terms of the number of operations on the contained
object, so we shouldn't be stating complexity in terms of the hash table initialization

VV: seem to be requesting modifications that render this not Tentatively Ready

GR: I think it can't be T/R

AM: make the editorial recommendation, consider fixing 23.2.1/3 to give us permission to state complexity in terms
of bucket initialization

HH: only set it to Review after we get new wording to review

[2015-02 Cologne]

Update wording, revisit later.

Proposed resolution:

Modify the following rows in Table 103 — Unordered associative container requirements to
add the explicit bucket allocation overhead of some constructions. As an editorial recommendation, it is
suggested not to shorten the sum 𝒪(n) + 𝒪(N) to
𝒪(n + N), because two different work units are involved.

1 Effects: Constructs an empty unordered_map using the specified hash function, key equality function,
and allocator, and using at least n buckets. If n is not provided, the number of buckets is
unspecified. max_load_factor() returns 1.0.

3 Effects: Constructs an empty unordered_map using the specified hash function, key equality function,
and allocator, and using at least n buckets. If n is not provided, the number of buckets is
unspecified. Then inserts elements from the range [f, l). max_load_factor() returns 1.0.

4 Complexity: Linear in the number of buckets.
In the average case linear in N and in the worst case quadratic in N to insert
the elements, where N is the number of elements in the range [f, l).

1 Effects: Constructs an empty unordered_multimap using the specified hash function, key equality function,
and allocator, and using at least n buckets. If n is not provided, the number of buckets is
unspecified. max_load_factor() returns 1.0.

3 Effects: Constructs an empty unordered_multimap using the specified hash function, key equality function,
and allocator, and using at least n buckets. If n is not provided, the number of buckets is
unspecified. Then inserts elements from the range [f, l). max_load_factor() returns 1.0.

4 Complexity: Linear in the number of buckets.
In the average case linear in N and in the worst case quadratic in N to insert
the elements, where N is the number of elements in the range [f, l).

1 Effects: Constructs an empty unordered_set using the specified hash function, key equality function,
and allocator, and using at least n buckets. If n is not provided, the number of buckets is
unspecified. max_load_factor() returns 1.0.

3 Effects: Constructs an empty unordered_set using the specified hash function, key equality function,
and allocator, and using at least n buckets. If n is not provided, the number of buckets is
unspecified. Then inserts elements from the range [f, l). max_load_factor() returns 1.0.

4 Complexity: Linear in the number of buckets.
In the average case linear in N and in the worst case quadratic in N to insert
the elements, where N is the number of elements in the range [f, l).

1 Effects: Constructs an empty unordered_multiset using the specified hash function, key equality function,
and allocator, and using at least n buckets. If n is not provided, the number of buckets is
unspecified. max_load_factor() returns 1.0.

3 Effects: Constructs an empty unordered_multiset using the specified hash function, key equality function,
and allocator, and using at least n buckets. If n is not provided, the number of buckets is
unspecified. Then inserts elements from the range [f, l). max_load_factor() returns 1.0.

4 Complexity: Linear in the number of buckets.
In the average case linear in N and in the worst case quadratic in N to insert
the elements, where N is the number of elements in the range [f, l).

1188(i). Unordered containers should have a minimum load factor as well as a maximum

Unordered associative containers have a notion of a maximum load factor:
when the number of elements grows large enough, the containers
automatically perform a rehash so that the number of elements per bucket
stays below a user-specified bound. This ensures that the hash table's
performance characteristics don't change dramatically as the size
increases.

For similar reasons, Google has found it useful to specify a minimum
load factor: when the number of elements shrinks by a large enough amount, the
containers automatically perform a rehash so that the number of elements
per bucket stays above a user-specified bound. This is useful for two
reasons. First, it prevents wasting a lot of memory when an unordered
associative container grows temporarily. Second, it prevents amortized
iteration time from being arbitrarily large; consider the case of a hash
table with a billion buckets and only one element. (This was discussed
even before TR1 was published; it was TR issue 6.13, which the LWG
closed as NAD on the grounds that it was a known design feature.
However, the LWG did not consider the approach of a minimum load
factor.)

The only interesting question is when shrinking is allowed. In principle
the cleanest solution would be shrinking on erase, just as we grow on
insert. However, that would be a usability problem; it would break a
number of common idioms involving erase. Instead, Google's hash tables
only shrink on insert and rehash.

The proposed resolution allows, but does not require, shrinking in
rehash, mostly because a postcondition for rehash that involves the
minimum load factor would be fairly complicated. (It would probably have
to involve a number of special cases and it would probably have to
mention yet another parameter, a minimum bucket count.)

The current behavior is equivalent to a minimum load factor of 0. If we
specify that 0 is the default, this change will have no impact on
backward compatibility.

[
2010 Rapperswil:
]

This seems to be a useful extension, but it is too late for 0x.
Move to Tentatively NAD Future.

[
Moved to NAD Future at 2010-11 Batavia
]

Proposed resolution:

Add two new rows, and change rehash's postcondition in the unordered
associative container requirements table in 26.2.7 [unord.req]:

Returns a non-negative number that the container attempts to keep the
load factor greater than or equal to. The container automatically
decreases the number of buckets as necessary to keep the load factor
above this number.

constant

a.min_load_factor(z)

void

Pre: z shall be non-negative. Changes the container's minimum
load factor, using z as a hint. [Footnote: the minimum
load factor should be significantly smaller than the maximum.
If z is too large, the implementation may reduce it to a more sensible value.]

constant

a.rehash(n)

void

Post: a.bucket_count() >= n, and a.size() <= a.bucket_count()
* a.max_load_factor(). [Footnote: It is intentional that the
postcondition does not mention the minimum load factor.
This member function is primarily intended for cases where the user knows
that the container's size will increase soon, in which case the container's
load factor will temporarily fall below a.min_load_factor().]

The insert members shall not affect the validity of references to
container elements, but may invalidate all iterators to the container.
The erase members shall invalidate only iterators and references to the
erased elements.

[A consequence of these requirements is that while insert may change the
number of buckets, erase may not. The number of buckets may be reduced
on calls to insert or rehash.]

Change paragraph 13:

The insert members shall not affect the validity of iterators if
zmin * B <= (N+n) <= zmax * B,
where N is the number of elements in
the container prior to the insert operation, n is the number of
elements inserted, B is the container's bucket count,
zmin is the container's minimum load factor,
and zmax is the container's maximum load factor.

Spotting a recent thread on the boost lists regarding collapsing
optional representations in optional<optional<T>> instances, I wonder if
we have some of the same issues with make_tuple, and now make_pair?

Essentially, if my generic code in my own library is handed a
reference_wrapper by a user, and my library in turn delegates some logic
to make_pair or make_tuple, then I am going to end up with a pair/tuple
holding a real reference rather than the intended reference wrapper.

There are two things as a library author I can do at this point:

document my library also has the same reference-wrapper behaviour as
std::make_tuple

roll my own make_tuple that does not unwrap references, a lost
opportunity to re-use the standard library.

(There may be some metaprogramming approaches my library can use to wrap
the make_tuple call, but all will be significantly more complex than
simply implementing a simplified make_tuple.)

Now I don't propose we lose this library facility, I think unwrapping
references will be the common behaviour. However, we might want to
consider adding another overload that does nothing special with
ref-wrappers. Note that we already have a second overload of
make_tuple in the library, called tie.

[
2009-09-30 Daniel adds:
]

I suggest changing the currently proposed paragraph for
make_simple_pair

Type requirements: sizeof...(Types) == 2.

to

Remarks: The program shall be ill-formed, if
sizeof...(Types) != 2.

...

or alternatively (but with a slightly different semantic):

Remarks: If sizeof...(Types) != 2, this function shall not
participate in overload resolution.

to follow the currently introduced style, and because the library does
not yet have a specific "Type requirements" element. If such a thing
were considered useful, it should be introduced as a separate
issue. Given the increasing complexity of either of these wordings,
it might be preferable to use the normal two-argument-declaration
style again in either of the following ways:

[
Draughting note: I chose a variadic representation similar to make_tuple
rather than naming both types as it is easier to read through the
clutter of metaprogramming this way. Given there are exactly two
elements, the committee may prefer to draught with two explicit template
type parameters instead
]

Add the following function to 23.5.3.4 [tuple.creation] and its
signature to the appropriate synopses:

This will store "i = 10" (for example) in the string s. Note
the need to cast the stream back to ostringstream& prior to using
the member .str(). This is necessary because the inserter has cast
the ostringstream down to a more generic ostream during the
insertion process.

I believe we can re-specify the rvalue-inserter so that this cast is unnecessary.
Thus our customer now has to only type:

std::string s = (std::ostringstream() << "i = " << i).str();

This is accomplished by having the rvalue stream inserter return an rvalue of
the same type, instead of casting it down to the base class. This is done by
making the stream generic, and constraining it to be an rvalue of a type derived
from ios_base.

The same argument and solution also applies to the rvalue extractor. This code has been
implemented and tested.

The terms valid iterator and singular aren't
properly defined. The fuzziness of those terms became even worse
after the resolution of 208 (including further updates by 278). In
27.2 [iterator.requirements] as of
N2723
the standard says now:

5 - These values are called past-the-end values. Values of an iterator i for
which the expression *i is defined are called dereferenceable. The library
never assumes that past-the-end values are dereferenceable. Iterators
can also have singular values that are not associated with any
container. [...] Results of most expressions are undefined for singular
values; the only exceptions are destroying an iterator that holds a
singular value and the assignment of a non-singular value to an iterator
that holds a singular value. [...] Dereferenceable values are always
non-singular.

10 - An invalid iterator is an iterator that may be singular.

First, issue 208 intentionally removed the earlier constraint that past-the-end
values are always non-singular. The reason for this was to support null
pointers as past-the-end iterators of e.g. empty sequences. But there
seem to exist different views on what a singular (iterator) value is. E.g.
according to the SGI definition
a null pointer is not a singular value:

Dereferenceable iterators are always nonsingular, but the converse is
not true.
For example, a null pointer is nonsingular (there are well-defined operations
involving null pointers) even though it is not dereferenceable.

Even if the standard prefers a different meaning of singular here, the
change was incomplete, because restricting the feasible expressions of singular
iterators to destruction and assignment isn't sufficient for a past-the-end
iterator: of course it must still be equality-comparable and in general be a readable value.

In all of the following algorithms, the formal template parameter ForwardIterator
is required to satisfy the requirements of a forward iterator (24.1.3)
[..], and is required to have the property that no exceptions are thrown from [..], or
dereference of valid iterators.

The standard should make clearer what "singular pointer" and "valid
iterator" mean. The fact that the term valid value
has a core-language meaning doesn't imply that for an iterator concept
the term "valid iterator" has the same meaning.

Let me add a final example: In 99 [allocator.concepts.members] of
N2914
we find:

pointer X::allocate(size_type n);

11 Returns: a pointer to the allocated memory. [Note: if n == 0, the return
value is unspecified. —end note]

[..]

void X::deallocate(pointer p, size_type n);

Preconditions: p shall be a non-singular pointer value obtained from a call
to allocate() on this allocator or one that compares equal to it.

If singular pointer values include null pointers, this makes the precondition
unclear when the pointer value is the result of allocate(0): since the return value
is unspecified, it could be a null pointer. Does that mean that programmers
need to check the pointer value for null before calling deallocate?

The generally accepted mathematical semantics of such a construct correspond
to quaternions through the Cayley-Dickson construction

(w+xi) + (y+zi) j

The proper implementation seems straightforward by adding a few
declarations like those below. I have included operator definition for
combining real scalars and complex types, as well, which seems
appropriate, as algebra of complex numbers allows mixing complex and
real numbers with operators. It also allows for constructs such as
complex<double> i(0, 1), x = 12.34 + 5*i;

Quaternions are often used in areas such as computer graphics where,
for example, they avoid the problem of gimbal lock when rotating objects
in 3D space and can be more efficient than matrix multiplications,
although I am applying them to a different field.

SG6 suggests this issue is a new feature, not a problem with the existing
standard, and should therefore be closed NAD. However, SG6 invites papers that bring
the proposal up to date with the current standard.

There exist optimized, vectorized vendor libraries for the creation of
random number generators, such as Intel's MKL [1] and AMD's ACML [2]. In
timing tests we have seen a performance gain of a factor of up to 80
(eighty) compared to a pure C++ implementation (in Boost.Random) when
using these generators to generate a sequence of normally distributed
random numbers. In codes dominated by the generation of random numbers
(we have application codes where random number generation is more than
50% of the CPU time) this factor of 80 is very significant.

To make use of these vectorized generators, we use a C++ class modeling
the RandomNumberEngine concept and forwarding the generation of random
numbers to those optimized generators. For example:

namespace mkl {
class mt19937 {.... };
}

For the generation of random variates we also want to dispatch to
optimized vectorized functions in the MKL or ACML libraries. See this
example:

Since the variate generation is done through the operator() of the
distribution there is no customization point to dispatch to Intel's or
AMD's optimized functions to generate normally distributed numbers based
on the mt19937 generator. Hence, the performance gain of 80 cannot be
achieved.

A similar customization point is missing in the C++0x design and
prevents the optimized vectorized version from being used.

Suggested resolution:

Add a customization point to the distribution concept. Instead of the
variate_generator template this can be done through a call to a
free function generate_variate found by ADL instead of
operator() of the distribution:

The library has many algorithms that take a source range represented by
a pair of iterators, and the start of some second sequence given by a
single iterator. Internally, these algorithms will produce undefined
behaviour if the second 'range' is not as large as the input range, but
none of the algorithms spell this out in Requires clauses, and there is
no catch-all wording to cover this in clause 17 or the front matter of
25.

There was an attempt to provide such wording in paper
n2944
but this
seems incidental to the focus of the paper, and getting the wording of
this issue right seems substantially more difficult than the simple
approach taken in that paper. Such wording will be removed from an
updated paper, and hopefully tracked via the LWG issues list instead.

It seems there are several classes of problems here and finding wording
to solve all in one paragraph could be too much. I suspect we need
several overlapping requirements that should cover the desired range of
behaviours.

Motivating examples:

A good initial example is the swap_ranges algorithm. Here there is a
clear requirement that first2 refers to the start of a valid range at
least as long as the range [first1, last1). n2944 tries to solve this
by positing a hypothetical last2 iterator that is implied by the
signature, and requiring distance(first2, last2) >= distance(first1, last1).
This mostly works, although I am uncomfortable assuming that last2 is
clearly defined and well known without any description of how to obtain
it (and I have no idea how to write that).

A second motivating example might be the copy algorithm. Specifically,
let us imagine a call like:
In this case, our input iterators are literally simple InputIterators,
and the destination is a simple OutputIterator. In neither case am I
happy referring to std::distance, in fact it is not possible for the
ostream_iterator at all as it does not meet the requirements. However,
any wording we provide must cover both cases. Perhaps we might deduce
last2 == ostream_iterator<int>{}, but that might not always be valid for
user-defined iterator types. I can well imagine an 'infinite range'
that writes to /dev/null and has no meaningful last2.

The motivating example in n2944 is std::equal, and that seems to fall somewhere between the
two.

Outlying examples might be partition_copy that takes two output
iterators, and the _n algorithms where a range is specified by a
specific number of iterations, rather than traditional iterator pair.
We should also not accidentally apply inappropriate constraints to
std::rotate which takes a third iterator that is not intended to be a
separate range at all.

I suspect we want some wording similar to:

For algorithms that operate on ranges where the end iterator of the
second range is not specified, the second range shall contain at least
as many elements as the first.

I don't think this quite captures the intent yet though. I am not sure
if 'range' is the right term here rather than sequence. More awkwardly,
I am not convinced we can describe an output sequence such as produced by
an ostream_iterator as "containing elements", at least not as a
precondition to the call before they have been written.

Another idea was to require that the trailing iterator support
at least distance(input range) applications of operator++, and that it may be
written through the same number of times if it is a mutable/output iterator.

We might also consider handling the case of an output range vs. an input
range in separate paragraphs, if that simplifies how we describe some of
these constraints.

[
2009-11-03 Howard adds:
]

Moved to Tentatively NAD Future after 5 positive votes on c++std-lib.

Rationale:

Does not have sufficient support at this time. May wish to reconsider for a
future standard.

Splitting strings into parts by some set of delimiters is a frequent task, but
there is no simple and generalized solution in the C++ Standard. Usually C++
developers use std::basic_stringstream<> to split a string into
parts, but there are several inconvenient restrictions:

we cannot explicitly assign the set of delimiters;

this approach is suitable only for strings, but not for other types of
containers;

we have a (possible) performance loss due to string instantiation.

Impact on the Standard

This algorithm doesn't interfere with any of the current standard algorithms.

Design Decisions

This algorithm is implemented in terms of input/output iterators. Also, there is
one additional wrapper for const CharType * specified delimiters.

1. Effects: splits the range [first, last) into parts, using any
element of [delimiter_first, delimiter_last) as a delimiter. The results
are pushed to the output iterator in the form of std::pair<ForwardIterator1,
ForwardIterator1>. Each of these pairs specifies a maximal subrange of
[first, last) which does not contain a delimiter.

1. Effects: splits the range [first, last) into parts, using any
element of delimiters (interpreted as a zero-terminated string) as a
delimiter. The results are pushed to the output iterator in the form of
std::pair<ForwardIterator1, ForwardIterator1>. Each of these
pairs specifies a maximal subrange of [first, last) which does not
contain a delimiter.

To achieve this expression, a smart pointer writer must introduce an explicit
conversion operator from smart_ptr<void> to
smart_ptr<T> so that
static_cast<pointer>(void_ptr) is a valid expression.
Unfortunately this explicit conversion weakens the safety of a smart pointer
since the following expression (invalid for raw pointers) would become valid:

smart_ptr<void> smart_v = ...;
smart_ptr<T> smart_t(smart_v);

On the other hand, shared_ptr also defines its own casting functions in
23.11.3.9 [util.smartptr.shared.cast], and although it's unlikely that a
programmer will use shared_ptr as allocator::pointer, having
two different ways to do the same cast operation does not seem reasonable. A
possible solution would be to replace static_cast<X::pointer>(w)
expression with a user customizable (via ADL)
static_pointer_cast<value_type>(w), and establish the
xxx_pointer_cast functions introduced by shared_ptr as the
recommended generic casting utilities of the standard.

8 ...But when a function template with explicit template arguments is used, the
call does not have the correct syntactic form unless there is a function
template with that name visible at the point of the call. If no such name is
visible, the call is not syntactically well-formed and argument-dependent lookup
does not apply.

The solution to making static_pointer_cast a customization point is to
add a generic declaration (no definition) of static_pointer_cast in a
namespace (like std) and apply a "using
std::static_pointer_cast" declaration to activate ADL:

Currently, the library lacks a convenient way to provide a hash function that
can be used with the provided unordered containers to allow the use of
non-trivial element types.

While we can easily declare an

std::unordered_set<int>

or

std::unordered_set<std::string>

we have no easy way to declare an unordered_set for a user-defined
type. IMO, this is a big obstacle to using unordered containers in practice. Note
that in Java, the wide usage of HashMap is based on the fact that there
is always a default hash function provided.

Of course, a default hash function implies the risk of providing poor hash
functions. But often even poor hash functions are good enough.

While I really would like to see a default hash function, I don't propose it
here because this would probably introduce a discussion that's too big at this
stage of C++0x.

However, I strongly suggest at least to provide a convenience variadic template
function make_hash<>() to allow an easy definition of a (possibly
poor) hash function.

Hash support based on ownership sharing should be
supplied for shared_ptr and weak_ptr.
For two shared_ptr objects p and q, two distinct
equivalence relations can be defined. One is based on
equivalence of pointer values, which is derived from the
expression p.get() == q.get() (hereafter called address based
equivalence relation), the other is based on
equivalence of ownership sharing, which is derived from
the expression !p.owner_before(q) && !q.owner_before(p)
(hereafter called ownership-based equivalence relation).
These two equivalence relations are independent in
general. For example, a shared_ptr object created by the
constructor of the signature shared_ptr(shared_ptr<U>
const &, T *) could reveal a difference between these two
relations. Therefore, hash support based on each
equivalence relation should be supplied for shared_ptr.
However, while the standard library provides the hash
support for address-based one (20.9.11.6 paragraph 2), it
lacks the hash support for ownership-based one. In
addition, associative containers work well in combination
with the shared_ptr's ownership-based comparison but
unordered associative containers don't. This is
inconsistent.

For the case of weak_ptr, hash support for the ownership based
equivalence relation can be safely defined on
weak_ptrs, and even on expired ones. The absence of
hash support for the ownership-based equivalence
relation is fatal, especially for expired weak_ptrs. And the
absence of such hash support precludes some quite
effective use-cases, e.g. erasing the unordered_map entry
of an expired weak_ptr key from a customized deleter
supplied to shared_ptrs.

Hash support for the ownership-based equivalence
relation cannot be provided by any user-defined manner
because information about ownership sharing is not
available to users at all. Therefore, the only way to provide
ownership-based hash support is to offer it intrusively by
the standard library.

As far as we know, such hash support is implementable.
A typical implementation of such a hash function could return
the hash value of the pointer to the counter object that is
internally managed by shared_ptr and weak_ptr.

[2010 Rapperswil:]

No consensus to make this change at this time.

Proposed resolution:

Add the following non-static member functions to
the shared_ptr and weak_ptr class templates:

These functions satisfy the following
requirements. Let p and q be objects of either
shared_ptr or weak_ptr, H be a hypothetical
function object type that satisfies the hash
requirements ([hash.requirements], 20.2.4) and h be an object of the
type H. The expression p.owner_hash() behaves
as if it were equivalent to the expression h(p). In
addition, h(p) == h(q) shall be true if p and
q share ownership.

vector<bool> iterators are not random access iterators
because their reference type is a special class, and not
bool &. All standard library operations taking iterators
should treat this iterator as if it were a random access iterator, rather
than a simple input iterator.

[
Resolution proposed in ballot comment
]

Either revise the iterator requirements to support proxy iterators
(restoring functionality that was lost when the Concept facility was
removed) or add an extra paragraph to the vector<bool>
specification requiring the library to treat vector<bool>
iterators as-if they were random access iterators, despite having the wrong
reference type.

[
Rapperswil Review
]

The consensus at Rapperswil is that it is too late for full support for
proxy iterators, but requiring the library to respect vector<bool>
iterators as-if they were random access would be preferable to flagging
this container as deliberately incompatible with standard library algorithms.

Alisdair to write the note, which may become normative Remark depending
on the preferences of the project editor.

[
Post-Rapperswil Alisdair provides wording
]

Initial wording is supplied, deliberately using Note in preference to
Remark although the author notes his preference for Remark. The
issue of whether iterator_traits<vector<bool>>::iterator_category
is permitted to report random_access_iterator_tag or must report
input_iterator_tag is not addressed.

[Note: All functions in the library that take a pair of iterators to
denote a range shall treat vector<bool> iterators as-if they were
random access iterators, even though the reference type is not a
true reference. — end note]

[
2010-11 Batavia:
]

Closed as NAD Future, because the current iterator categories cannot correctly describe
vector<bool>::iterator. But saying that they are Random Access Iterators
is also incorrect, because it is not too hard to create a corresponding test that fails.
We should deal with the more general proxy iterator problem in the future, and see no
benefit in adopting a partial workaround specific to vector<bool> now.

[2017-02 in Kona, LEWG recommends NAD]

P0022 Proxy Iterators for the Ranges Extensions -
as much of a fix as we're going to get for vector<bool>.

[2017-06-02 Issues Telecon]

P0022 is exploring a resolution.
We consider this to be a fairly important issue.

An atomic store shall only store a value that has
been computed from constants and program input values
by a finite sequence of program evaluations, such
that each evaluation observes the values of variables
as computed by the last prior assignment in the
sequence.

If A is not sequenced before B and B is not
sequenced before A, then A and B are unsequenced.
[ Note: The execution of unsequenced
evaluations can overlap. — end note ]

Overlapping executions can make it impossible to
construct the sequence described in 32.5 [atomics.lockfree] p.8. We are not
sure of the intention here and do not offer a suggestion for
change, but note that 32.5 [atomics.lockfree] p.8 is the condition that prevents
out-of-thin-air reads.

For an example, suppose we have a function invocation
f(e1,e2). The evaluations of e1 and e2 can overlap.
Suppose that the evaluation of e1 writes y and reads x
whereas the evaluation of e2 reads y and writes x, with
reads-from edges as below (all this is within a single
thread).

     e1                 e2
  Wrlx y --         -- Wrlx x
        rf \       / rf
             \   /
               X
             /   \
  Rrlx x <--       --> Rrlx y

This seems like it should be allowed, but there seems to
be no way to produce a sequence of evaluations with the
property above.

In more detail, here the two evaluations, e1 and e2, are
being executed as the arguments of a function and are
consequently not sequenced-before each other. In
practice we'd expect that they could overlap (as allowed
by 6.8.1 [intro.execution] p.13), with the two writes taking effect before the two
reads. However, if we have to construct a linear order of
evaluations, as in 32.5 [atomics.lockfree] p.8, then the execution above is not
permitted. Is that really intended?

[
Resolution proposed by ballot comment
]

Please clarify.

[2011-03-09 Hans comments:]

I'm not proud of 32.4 [atomics.order] p9 (formerly p8), and I agree with the comments that this
isn't entirely satisfactory. 32.4 [atomics.order] p9 was designed to preclude
out-of-thin-air results for races among memory_order_relaxed atomics, in spite of
the fact that Java experience has shown we don't really know how to do that adequately. In
the long run, we probably want to revisit this.

However, in the short term, I'm still inclined to declare this NAD, for two separate reasons:

6.8.1 [intro.execution] p15 states: "If a side effect on a scalar
object is unsequenced relative to either another side
effect on the same scalar object or a value computation
using the value of the same scalar object, the behavior is undefined."
I think the examples presented here have undefined behavior as a result.
It's not completely clear to me whether examples can be constructed
that exhibit this problem, and don't have undefined behavior.

This comment seems to be using a different meaning of "evaluation"
from what is used elsewhere in the standard. The sequence of evaluations
here doesn't have to consist of full expression evaluations. They
can be evaluations of operations like lvalue to rvalue conversion,
or individual assignments. In particular, the reads and writes
executed by e1 and e2 in the example could be treated as separate
evaluations for purposes of producing the sequence.
The definition of "sequenced before" in 6.8.1 [intro.execution] makes
little sense if the term "evaluation" is restricted to any notion
of complete expression. Perhaps we should add yet another note
to clarify this? 32.4 [atomics.order] p10 probably leads to
the wrong impression here.

An alternative resolution would be to simply delete our flaky
attempt at preventing out-of-thin-air reads, by removing 32.4 [atomics.order] p9-11,
possibly adding a note that explains that we technically allow,
but strongly discourage them. If we were starting this from scratch
now, that would probably be my preference. But it seems like too drastic
a resolution at this stage.

Add join_for and join_until. Or decide one should
never join a thread with a timeout since pthread_join doesn't have a
timeout version.

[
2010 Batavia
]

The concurrency working group deemed this an extension beyond the scope of C++0x.

Rationale:

The LWG does not wish to make a change at this time.

[2017-03-01, Kona]

SG1 recommends: Close as NAD

There has not been much demand for it, and it would usually be difficult to deal with thread_local destructor races.
It can be approximated with a condition variable wait followed by an unconditional join. Adding it would create
implementation issues on Posix. As always, this may be revisited if we have a paper exploring the issues in detail.

Proposed resolution:

1488(i). Improve interoperability between the C++0x and C1x threads APIs

Cooperate with WG14 to improve interoperability between
the C++0x and C1x threads APIs. In particular, C1x
mutexes should be conveniently usable with a C++0x lock_guard.
Performance overheads for this combination
should be considered.

[
Resolution proposed by ballot comment:
]

Remove C++0x timed_mutex and
timed_recursive_mutex if that facilitates
development of more compatible APIs.

[
2010 Batavia
]

The concurrency sub-group reviewed the options, and decided that closer harmony should wait until both standards are published.

Rationale:

The LWG does not wish to make any change at this time.

[2017-03-01, Kona]

SG1 recommends: Close as NAD

Papers about C compatibility are welcome, but there may be more pressing issues. C threads are not consistently available
at this point, so there seems to be little demand to fix this particular problem.

mutex and recursive_mutex should have an is_locked()
member function. is_locked allows a user to test a lock
without acquiring it and can be used to implement a lightweight
try_try_lock.

[
Resolution proposed by ballot comment:
]

Add a member function:

bool is_locked() const;

to std::mutex and std::recursive_mutex. These
functions return true if the current thread would
not be able to obtain a mutex. These functions do
not synchronize with anything (and, thus, can
avoid a memory fence).

[
2010 Batavia
]

The Concurrency subgroup reviewed this issue and deemed it to be an extension to be handled after publishing C++0x.

Rationale:

The LWG does not wish to make a change at this time.

[2017-03-01, Kona]

SG1 recommends: Close as NAD

Several participants voiced strong objections, based on either memory model issues or lock elision. No support. It is
already possible to write a wrapper that explicitly tracks ownership for testing in the owning thread, which may have
been part of the intent here.

The standard doesn't say that containers should use abstract pointer
types internally. Both Howard and Pablo agree that this is the intent.
Further, it is necessary for containers to be stored, for example, in
shared memory with an interprocess allocator (the type of scenario that
allocators are intended to support).

In spite of the (possible) agreement on intent, it is necessary to make
this explicit:

An implementation might like to store the result of dereferencing the
pointer (which is a raw reference) as an optimization, but that prevents
the data structure from being put in shared memory, etc. In fact, a
container could store raw references to the allocator, which would be a
little weird but conforming as long as it has one by-value copy.
Furthermore, pointers to locales, ctypes, etc. may be there, which also
prevents the data structure from being put in shared memory, so we
should make explicit that a container does not store raw pointers or
references at all.

[
Pre-batavia
]

This issue is being opened as part of the response to NB comments US-104/141.
See paper N3171
in the pre-Batavia mailing.

[2011-03-23 Madrid meeting]

Deferred

[
2011 Batavia
]

This may be an issue, but it is not clear. We want to gain a few years experience
with the C++11 allocator model to see if this is already implied by the existing
specification.

[..] In all container types defined in this Clause, the member get_allocator() returns
a copy of the allocator used to construct the container or, if that allocator has been replaced,
a copy of the most recent replacement. The container may not store internal objects whose
types are of the form T * or T & except insofar as they are part of the
item type or members.

During the Pittsburgh meeting the proposal N3066
was accepted because it fixed several severe issues related to the iterator specification. But the current working draft (N3225)
does not reflect all these changes. Since I'm unaware whether every correction can be done editorially, this issue is submitted to take
care of that. To give one example: All expressions of Table 108 — "Output iterator requirements" have a post-condition
that the iterator is incrementable. This is impossible, because it would exclude any finite sequence that is accessed by an output
iterator, such as a pointer to a C array. The N3066 wording changes did not have these effects.

[2011-03-01: Daniel comments:]

This issue has some overlap with the issue 2038 and I would prefer if we
could solve both at one location. I suggest the following approach:

The terms dereferenceable and incrementable could be defined in a more
general way not restricted to iterators (similar to the concepts HasDereference and
HasPreincrement from working draft N2914). But on the other hand, all current usages of
dereferenceable and incrementable are involved with types that satisfy
iterator requirements. Thus, I believe that it is sufficient for C++0x to add corresponding definitions to
27.2.1 [iterator.requirements.general] and to let all previous usages of these terms refer to this
sub-clause. Since the same problem occurs with the past-the-end iterator, this proposal suggests providing
similar references to usages that precede its definition as well.

We also need to ensure that all iterator expressions get either an operational semantics in
terms of others or we need to add missing pre- and post-conditions. E.g. we have the following
ones without semantics:

respectively. Please note especially the latter expression for bidirectional iterator. It fixes a problem
that we have for forward iterator as well: Both these iterator categories provide stronger guarantees
than input iterator, because the result of the dereference operation is reference, and not
only convertible to the value type (The exact form from the SGI documentation does not correctly refer to
reference).

[2011-03-14: Daniel comments and updates the suggested wording]

In addition to the before-mentioned necessary changes there is another one needed, which
became obvious due to issue 2042: forward_list<>::before_begin() returns
an iterator value which is not dereferenceable, but obviously the intention is that it should
be incrementable. This leads to the conclusion that imposing dereferenceable as a requirement
for the expression ++r is wrong: We only need the iterator to be incrementable. A
similar conclusion applies to the expression --r of bidirectional iterators.

[
2011 Bloomington
]

Consensus this is the correct direction, but there are (potentially) missing incrementable
preconditions on some table rows, and the Remarks on when an output iterator becomes dereferenceable
are probably better handled outside the table, in a manner similar to the way we word for input
iterators.

There was some concern about redundant pre-conditions when the operational semantic is defined in
terms of operations that have preconditions, and a similar level of concern over dropping such
redundancies vs. applying a consistent level of redundant specification in all the iterator tables.
Wording clean-up in either direction would be welcome.

There is only a small number of further changes suggested to get rid of superfluous
requirements and essentially non-normative assertions. Operations should not have extra
pre-conditions, if defined by "in-terms-of" semantics, see e.g. a != b or a->m
for Table 107. Further, some remarks that impose nothing and say nothing new have been removed,
because I could not find anything helpful that they provide.
E.g. consider the remarks for Table 108 for the operations dereference-assignment and
preincrement: They provide no additional information and say nothing surprising. With the
new pre-conditions and post-conditions it is implied what the remarks intend to say.

The following sentence is dropped from the standard section on OutputIterators:

"In particular, the following two conditions should hold: first, any
iterator value should be assigned through before it is incremented
(this is, for an output iterator i, i++; i++; is not a valid code
sequence); second, any value of an output iterator may have at most
one active copy at any given time (for example, i = j; *++i = a; *j = b;
is not a valid code sequence)."

[
2011-11-04: Daniel comments and improves the wording
]

In regard to the first part of the comment, the intention of the newly proposed wording
was to make clear that for the expression

*r = o

we have the precondition dereferenceable and the post-condition
incrementable. And for the expression

++r

we have the precondition incrementable and the post-condition dereferenceable
or past-the-end. This should not allow for a sequence like i++; i++;
but I agree that it doesn't exactly say that.

In regard to the second point: To make this point clearer, I suggest to
add a similar additional wording as we already have for input iterator to the
"Assertion/note" column of the expression ++r:

"Post: any copies of the previous value of r are no longer
required to be dereferenceable or incrementable."

The proposed wording has been updated to honor the observations of Alexander Stepanov.

[2015-02 Cologne]

The matter is complicated, Daniel volunteers to write a paper.

Proposed resolution:

Add a reference to 27.2.1 [iterator.requirements.general] to the following parts of the
library preceding Clause 24 Iterators library: (I stopped from 26.2.7 [unord.req] on, because
the remaining references are the concrete containers)

Edit 27.2.1 [iterator.requirements.general] p. 5 as indicated (The intent is to properly define
incrementable and to ensure some further library guarantee related to past-the-end iterator values):

-5- Just as a regular pointer to an array guarantees that there is a pointer value pointing past the last element
of the array, so for any iterator type there is an iterator value that points past the last element of a
corresponding sequence. These values are called past-the-end values. Values of an iterator i for which the
expression *i is defined are called dereferenceable. Values of an iterator i for which the
expression ++i is defined are called incrementable. The library never assumes that
past-the-end values are dereferenceable or incrementable. Iterators can also have singular values
that are not associated with any sequence. […]

Modify the column contents of Table 107 — "Input iterator requirements",
27.2.3 [input.iterators], as indicated [Rationale: The wording changes attempt
to define a minimal "independent" set of operations, namely *a and ++r, and
to specify the semantics of the remaining ones. This approach seems to be in agreement with the
original SGI specification
— end rationale]:

Table 107 — Input iterator requirements (in addition to Iterator)

Expression

Return type

Operational semantics

Assertion/note pre-/post-condition

a != b

contextually
convertible to bool

!(a == b)

pre: (a, b) is in the domain
of ==.

*a

convertible to T

pre: a is dereferenceable.
The expression (void)*a, *a is equivalent
to *a.
If a == b and (a,b) is in
the domain of == then *a is
equivalent to *b.

a->m

(*a).m

pre: a is dereferenceable.

++r

X&

pre: r is incrementable.
post: r is dereferenceable or r is past-the-end.
post: any copies of the
previous value of r are no
longer required either to be
dereferenceable, incrementable,
or to be in the domain of ==.

(void)r++

(void)++r

equivalent to (void)++r

*r++

convertible to T

{ T tmp = *r;
++r;
return tmp; }

Modify the column contents of Table 108 — "Output iterator requirements",
27.2.4 [output.iterators], as indicated [Rationale: The wording changes attempt
to define a minimal "independent" set of operations, namely *r = o and ++r,
and to specify the semantics of the remaining ones. This approach seems to be in agreement with
the original SGI specification
— end rationale]:

Table 108 — Output iterator requirements (in addition to Iterator)

Expression

Return type

Operational semantics

Assertion/note pre-/post-condition

*r = o

result is not used

pre: r is dereferenceable.
post: r is incrementable.

++r

X&

pre: r is incrementable.
&r == &++r.
post: r is dereferenceable or r is incrementable.
post: any copies of the previous value of r are no longer
required to be dereferenceable or incrementable.

r++

convertible to const X&

{ X tmp = r;
++r;
return tmp; }

post: r is incrementable.

*r++ = o

result is not used

{ *r = o; ++r; }

post: r is incrementable.

Modify the column contents of Table 109 — "Forward iterator requirements",
27.2.5 [forward.iterators], as indicated [Rationale: Since the return type of the
expression *r++ is now guaranteed to be type reference, the implied operational
semantics from input iterator based on value copies is wrong — end rationale]

"In "24.2.4 Output iterators" there are 3 uses of incrementable. I've
not found the definition. Could some one point me where it is defined?

Something similar occurs with dereferenceable. While the definition is
given in "24.2.1 In general" it is used several times before.

Shouldn't these definitions be moved to some previous section?"

He's right: both terms are used without being properly defined.

There is no definition of "incrementable".

While there is a definition of "dereferenceable", it is, in fact, a definition of
"dereferenceable iterator". "dereferenceable" is used throughout Clause 23 (Containers)
before its definition in Clause 24. In almost all cases it's referring to iterators,
but in 20.5.3.2 [swappable.requirements] there is a mention of "dereferenceable object"; in
20.5.3.5 [allocator.requirements] the table of Descriptive variable definitions refers to a
"dereferenceable pointer"; 23.10.3.2 [pointer.traits.functions] refers to a
"dereferenceable pointer"; in 25.4.5.1.2 [locale.time.get.virtuals]/11 (do_get)
there is a requirement that a pointer "shall be dereferenceable". In those specific cases
it is not defined.

[2011-03-02: Daniel comments:]

I believe that the currently proposed resolution of issue 2035 solves this
issue as well.

In particular, the lack of is_nothrow_convertible is severely restricting. This
was recently recognized when the proposal for decay_copy was prepared in
N3255.
There does not exist a portable means to define the correct conditional noexcept
specification for the decay_copy function template, which is declared as:

The semantics of decay_copy are based on an implicit conversion, which in turn
influences the overload set of functions that are viable here. In most circumstances
this will have the same effect as comparing against the trait
std::is_nothrow_move_constructible, but there is no guarantee for that being
the right answer. It is possible to construct examples where this would lead
to the wrong result, e.g.

std::is_nothrow_move_constructible will properly honor the explicit template
constructor because of the direct-initialization context which is part of the
std::is_constructible definition and will in this case select it, such that
std::is_nothrow_move_constructible<S>::value == true, but if we had
the traits is_nothrow_convertible, is_nothrow_convertible<S, S>::value
would evaluate to false, because it would use the copy-initialization context
that is part of the is_convertible definition, excluding any explicit
constructors and giving the opposite result.

The decay_copy example is surely not one of the most convincing examples, but
is_nothrow_convertible has several use-cases, and can e.g. be used to express
whether calling the following implicit conversion function could throw an exception or not:

The currently agreed on proposed wording for 2015 using
remove_all_extents<T>::type instead of the "an array of
unknown bound" terminology in the precondition should be extended to
some further entries especially in Table 49, notably the
is_*constructible, is_*assignable, and
is_*destructible entries. To prevent ODR violations, incomplete
element types of arrays must be excluded for value-initialization and
destruction for example. Construction and assignment has to be honored,
when we have array-to-pointer conversions or pointer conversions of
incomplete pointees in effect.

[2012, Kona]

The issue is that in three type traits, we are accidentally saying that in certain
circumstances the type must give a specified answer when given an incomplete type.
(Specifically: an array of unknown bound of incomplete type.) The issue asserts
that there's an ODR violation, since the trait returns false in that case but might
return a different version when the trait is completed.

Howard argues: no, there is no risk of an ODR violation.
is_constructible<A[]> must return false regardless of whether
A is complete, so there's no reason to forbid an array of unknown bound of
incomplete types. Same argument applies to is_assignable. General agreement
with Howard's reasoning.

There may be a real issue for is_destructible. None of us are sure what
is_destructible is supposed to mean for an array of unknown bound
(regardless of whether its type is complete), and the standard doesn't make it clear.
The middle column doesn't say what it's supposed to do for incomplete types.

In at least one implementation, is_destructible<A[]> does return true
if A is complete, which would result in ODR violation unless we forbid it for
incomplete types.

Move to open. We believe there is no issue for is_constructible or
is_assignable, but that there is a real issue for is_destructible.

In N3290, which is to become the official standard, in 21.8.4.4 [terminate],
paragraph 1 reads

Remarks: Called by the implementation when exception handling must
be abandoned for any of several reasons (15.5.1), in effect immediately after
evaluating the throw-expression (18.8.3.1). May also be called directly by the
program.

It is not clear what is "in effect". It was clear in previous drafts where paragraphs
1 and 2 read:

Called by the implementation when exception handling must be
abandoned for any of several reasons (15.5.1). May also be called directly
by the program.

Effects: Calls the terminate_handler function in effect
immediately after evaluating the throw-expression (18.8.3.1), if called by the
implementation, or calls the current terminate_handler function,
if called by the program.

It was changed by N3189. The same applies to function unexpected (D. 11.4, paragraph 1).

Assuming the previous wording is still intended, the wording can be read
"unless std::terminate is called by the program, we will use the handler
that was in effect immediately after evaluating the throw-expression".

This assumes that there is some throw-expression connected to every
situation that triggers the call to std::terminate. But this is not
the case:

In case std::thread is assigned to or destroyed while being joinable
there is no throw-expression involved.

In case std::unexpected is called by the program, std::terminate is
triggered by the implementation - no throw-expression involved.

In case a destructor throws during stack unwinding we have two throw-expressions
involved.

Which one is referred to?

In case std::nested_exception::rethrow_nested is called for an object that has
captured no exception, there is no throw-expression involved directly (and no throw
may be involved even indirectly).

Required behavior: A terminate_handler shall terminate execution
of the program without returning to the caller.

This seems to allow that the function may exit by throwing an
exception (because word "return" implies a normal return).

One could argue that the words "terminate execution of the program" are sufficient,
but then why would "without returning to the caller" be mentioned? In
case such a handler throws, the noexcept specification of function std::terminate
is violated, and std::terminate would be called recursively - should
std::abort not be called in case of a recursive std::terminate
call? On the other hand some controlled recursion could be useful, like in the
following technique.

The wording changes mentioned here, made by N3189 in regard to 21.8.4.4 [terminate] p1,
were done for a better separation of effects (Effects element) and additional normative
explanations (Remarks element); no meaning change was intended. Further,
there was already a defect in the previous wording, which was not updated when
further situations were defined in which std::terminate is supposed to be
called by the implementation.

The part

"in effect immediately after evaluating the throw-expression"

should be removed, and the quoted reference to 21.8.4.1 [terminate.handler]
needs to become part of the Effects element where it refers to the current terminate_handler
function, so it should be moved just after

"Effects: Calls the current terminate_handler function."

It seems OK to allow a termination handler to exit via an exception, but the
suggested idiom had better be replaced by a simpler one based on
evaluating the current exception pointer in the terminate handler, e.g.

When the EmplaceConstructible (26.2.1 [container.requirements.general]/13) requirement is used
to initialize an object, direct-initialization occurs. Initializing an aggregate or using a std::initializer_list
constructor with emplace requires naming the initialized type and moving a temporary. This is a result of
std::allocator::construct using direct-initialization, not list-initialization (sometimes called "uniform
initialization") syntax.

Altering std::allocator<T>::construct to use list-initialization would, among other things, give
preference to std::initializer_list constructor overloads, breaking valid code in an unintuitive and
unfixable way — there would be no way for emplace_back to access a constructor preempted by
std::initializer_list without essentially reimplementing push_back.

The proposed compromise is to use SFINAE with std::is_constructible, which tests whether direct-initialization
is well formed. If is_constructible is false, then an alternative std::allocator::construct overload
is chosen which uses list-initialization. Since list-initialization always falls back on direct-initialization, the
user will see diagnostic messages as if list-initialization (uniform-initialization) were always being used, because
the direct-initialization overload cannot fail.

I can see two corner cases that expose gaps in this scheme. One occurs when arguments intended for
std::initializer_list satisfy a constructor, such as trying to emplace-insert a value of {3, 4} in
the above example. The workaround is to explicitly specify the std::initializer_list type, as in
v.emplace_back(std::initializer_list<int>{3, 4}). Since this matches the semantics as if
std::initializer_list were deduced, there seems to be no real problem here.

The other case is when arguments intended for aggregate initialization satisfy a constructor. Since aggregates cannot
have user-defined constructors, this requires that the first nonstatic data member of the aggregate be implicitly
convertible from the aggregate type, and that the initializer list have one element. The workaround is to supply an
initializer for the second member. It remains impossible to in-place construct an aggregate with only one nonstatic
data member by conversion from a type convertible to the aggregate's own type. This seems like an acceptably small
hole.

The change is quite small because EmplaceConstructible is defined in terms of whatever allocator is specified,
and there is no need to explicitly mention SFINAE in the normative text.

[2012, Kona]

Move to Open.

There appears to be a real concern with initializing aggregates, which can be performed only using brace-initialization. There is little interest in the rest of the issue, given the existence of 'emplace' methods in C++11.

Move to Open, to find an acceptable solution for initializing aggregates. There is the potential that EWG may have an interest in this area of language consistency as well.

Jonathan suggests making the new constructors non-explicit and makes some presentational improvements.

[2013-09 Chicago]

Move to deferred.

This issue has much in common with similar problems with std::function that are being addressed
by the polymorphic allocators proposal currently under evaluation in LEWG. Defer further discussion on
this topic until the final outcome of that paper and its proposed resolution is known.

for types satisfying the EqualityComparable or LessThanComparable requirements, respectively, are required to be "convertible to bool", which corresponds to a copy-initialization context. But several newer parts of the library that refer to such contexts have lowered the requirements, taking advantage of the new terminology "contextually convertible to bool" instead, which corresponds to a direct-initialization context (in addition to "normal" direct-initialization constructions, operands of logical operations as well as if or switch conditions also belong to this special context).

One example for these new requirements are input iterators which satisfy EqualityComparable
but also specify that the expression

a != b

shall be just "contextually convertible to bool". The same discrepancy
exists for requirement set NullablePointer in regard to several equality-related expressions.

For random access iterators we have

a < b contextually convertible to bool

as well as for all derived comparison functions, so strictly speaking we could have a random access
iterator that does not satisfy the LessThanComparable requirements, which looks like an
artifact to me.

Even if we keep with the existing requirements based on LessThanComparable or
EqualityComparable we still would have the problem that some current specifications
are actually based on the assumption of implicit convertibility instead of "explicit convertibility", e.g.
23.11.1.5 [unique.ptr.special] p3:

In all these places the expressions involving comparison functions (but not those of the conversion of a NullablePointer to bool!) are assumed to be "convertible to bool". I think this is a very natural assumption, and all delegations of the comparison functions of some type X to some other API type Y in third-party code are done assuming that copy-initialization semantics will just work.

The actual reason for using the newer terminology can be traced back to LWG 556. My hypothesis is that the resolution of that issue also needs a slight correction. Why so?

The issue was opened because of worries based on the previous "convertible to bool" wording. An expression like "!pred(a, b)" might not be well-formed in those situations, because operator! might not be accessible or might have unusual semantics (and similarly for other logical operations). This can indeed happen with unusual proxy return types, so the idea was that the evaluation of
Predicate, BinaryPredicate (28.1 [algorithms.general] p8+9), and Compare
(28.7 [alg.sorting] p2) should be defined based on contextual conversion to bool.
Unfortunately this alone is not sufficient: In addition, I think, we also want the predicates
to be (implicitly) convertible to bool! Without this wording, several conditions are plain wrong,
e.g. 28.5.5 [alg.find] p2, which talks about "pred(*i) != false" (find_if) and
"pred(*i) == false" (find_if_not). These expressions are not within a boolean context!

While we could simply fix all these places by proper wording to be considered in a "contextual conversion to
bool", I think that this is not the correct solution: Many third-party libraries already refer to
the previous C++03 Predicate definition — it actually predates C++98 and is as old as the
SGI specification. It seems to be a high price to
pay to switch to direct initialization here instead of fixing a completely different specification problem.

If a parameter is Predicate, operator() applied to the actual template argument shall return a value that
is convertible to bool.

The problem here is not that we have two different definitions of Predicate in the standard — this
is confusing, but this fact alone is not a defect. The first (minor) problem is that this definition does not properly
apply to function objects that are function pointers, because operator() is not defined in a strict sense.
But the actually worse second problem is that this wording has the very same problem that originally led to LWG 556! We only need to look at 33.5.3 [thread.condition.condvar] p15 to recognize this:

while (!pred())
wait(lock);

The negation expression here looks very similar to the example provided in LWG 556 and is sensitive
to the same "unusual proxy" problem. Changing the 33.2.1 [thread.req.paramname] wording to a corresponding
"contextual conversion to bool" wouldn't work either, because existing specifications rely on "convertible
to bool", e.g. 33.5.3 [thread.condition.condvar] p32+33+42 or 33.5.4 [thread.condition.condvarany]
p25+26+32+33.

To summarize: I believe that LWG 556 was not completely resolved. A pessimistic interpretation is that even with the current wording based on "contextually convertible to bool" the actual problem of that
issue has not been fixed. What actually needs to be required here is some normative wording that basically
expresses something along the lines of:

The semantics of any contextual conversion to bool shall be equivalent to the semantics of
any implicit conversion to bool.

This is still not complete without having concepts, but it seems to be a better approximation. Another way of solving
this issue would be to define a minimum requirements table with equivalent semantics. The proposed wording is a bit
simpler but attempts to express the same thing.

[2012, Kona]

Agree with Daniel that we potentially broke some C++03 user code, accept the changes striking
"contextually" from tables. Stefan to provide revised wording for section 25, and figure out
changes to section 30.

Move to open, and then to Review when updated wording from Stefan is available.

[2012-10-12, STL comments]

The current proposed resolution still isn't completely satisfying. It would certainly be possible for the Standard to
require these various expressions to be implicitly and contextually convertible to bool, but that would have
a subtle consequence (which, I will argue, is undesirable - regardless of the fact that it dates all the way back to
C++98/03). It would allow users to provide really wacky types to the Standard Library, with one of two effects:

(A) Standard Library implementations would have to go to great lengths to respect such wacky types, essentially using static_cast<bool> when invoking any predicates or comparators.

(B) Otherwise, such wacky types would be de facto nonportable, because they would make Standard Library implementations explode.

Effect B is the status quo we're living with today. What Standard Library implementations want to do with pred(args)
goes beyond "if (pred(args))" (C++03), contextually converting pred(args) to bool (C++11), or
implicitly and contextually converting pred(args) to bool (the current proposed resolution).
Implementations want to say things like:

if (pred(args))
if (!pred(args))
if (cond && pred(args))
if (cond && !pred(args))

These are real examples taken from Dinkumware's implementation. There are others that would be realistic
("pred(args) && cond", "cond || pred(args)", etc.)

Although negation was mentioned in this issue's Discussion section, and in LWG 556's, the current proposed
resolution doesn't fix this problem. Requiring pred(args) to be implicitly and contextually convertible to bool
doesn't prevent operator!() from being overloaded and returning std::string (as a wacky example). More
ominously, it doesn't prevent operator&&() and operator||() from being overloaded and destroying
short-circuiting.

I would like LWG input before working on Standardese for a new proposed resolution. Here's an outline of what I'd like to
do:

Introduce a new "concept" in 20.5.3 [utility.requirements], which I would call BooleanTestable in the
absence of better ideas.

Centralize things and reduce verbosity by having everything simply refer to BooleanTestable when necessary.
I believe that the tables could say "Return type: BooleanTestable", while Predicate/BinaryPredicate/Compare
would need the incantation "shall satisfy the requirements of BooleanTestable".

Resolve the tug-of-war between users (who occasionally want to do weird things) and implementers (who don't want to have
to contort their code) by requiring that:

Given a BooleanTestable x, x is both implicitly and contextually convertible to bool.

Given a BooleanTestable x, !x is BooleanTestable. (This is intentionally "recursive".)

Given a BooleanTestable x and a BooleanTestable y of possibly different types, "x && y"
and "x || y" invoke the built-in operator&&() and operator||(), triggering short-circuiting.

bool is BooleanTestable.

I believe that this simultaneously gives users great latitude to use types other than bool, while allowing
implementers to write reasonable code in order to get their jobs done. (If I'm forgetting anything that implementers
would want to say, please let me know.)

About requirement (I): As Daniel patiently explained to me, we need to talk about both implicit conversions and
contextual conversions, because it's possible for a devious type to have both "explicit operator bool()"
and "operator int()", which might behave differently (or be deleted, etc.).

About requirement (IV): This is kind of tricky. What we'd like to say is, "BooleanTestable can't ever trigger
an overloaded logical operator". However, given a perfectly reasonable type Nice - perhaps even bool itself! -
other code (perhaps a third-party library) could overload operator&&(Nice, Evil). Therefore, I believe
that the requirement should be "no first use" - the Standard Library will ask for various BooleanTestable types
from users (for example, the result of "first != last" and the result of "pred(args)"), and as long
as they don't trigger overloaded logical operators with each other, everything is awesome.

About requirement (V): This is possibly redundant, but it's trivial to specify, makes it easier for users to understand
what they need to do ("oh, I can always achieve this with bool"), and provides a "base case" for requirement
(IV) that may or may not be necessary. Since bool is BooleanTestable, overloading
operator&&(bool, Other) (etc.) clearly makes the Other type non-BooleanTestable.

-8- The Predicate parameter is used whenever an algorithm expects a function object
(23.14 [function.objects]) that, when applied to the result of dereferencing the corresponding iterator,
returns a value testable as true. In other words, if an algorithm takes Predicate pred
as its argument and first as its iterator argument, it should work correctly in the construct
pred(*first) implicitly or contextually converted to bool (Clause 7 [conv]).
The function object pred shall not apply any non-constant function through the dereferenced iterator.

-9- The BinaryPredicate parameter is used whenever an algorithm expects a function object that when applied
to the result of dereferencing two corresponding iterators or to dereferencing an iterator and type
T when T is part of the signature returns a value testable as true. In other words, if an algorithm takes
BinaryPredicate binary_pred as its argument and first1 and first2 as its iterator arguments, it should
work correctly in the construct binary_pred(*first1, *first2) implicitly or contextually converted to bool (Clause 7 [conv]).
BinaryPredicate always takes the first iterator's value_type as its first argument, that is, in those cases
when T value is part of the signature, it should work correctly in the construct binary_pred(*first1, value) implicitly or contextually converted to bool (Clause 7 [conv]). binary_pred shall
not apply any non-constant function through the dereferenced iterators.

-2- Compare is a function object type (23.14 [function.objects]). The return value of the function
call operation applied to an object of type Compare, when implicitly or contextually converted
to bool (7 [conv]), yields true if the first argument of the call is less than the second, and
false otherwise. Compare comp is used throughout for algorithms assuming an ordering relation. It is assumed
that comp will not apply any non-constant function through the dereferenced iterator.

-2- Predicate is a function object type (23.14 [function.objects]). The return value of the function call operation applied to an object of type Predicate, when implicitly or contextually converted to bool (7 [conv]), yields true if the corresponding test condition is satisfied, and false otherwise.

The presented wording follows relatively closely STL's outline with the following notable exceptions:

A reference to BooleanTestable in table "Return Type" specifications seemed very unusual to me and
I found no "prior art" for this in the Standard. Instead I decided to follow the usual style to add a symbol
with a specific meaning to a specific paragraph that specifies symbols and their meanings.

STL's requirement IV suggested directly referring to the built-in operators && and ||. In my opinion this concrete requirement isn't needed if we simply require that two BooleanTestable operands behave equivalently to those same two operands after conversion to bool (each of them).

I couldn't find a good reason to require normatively that type bool meets the requirements of BooleanTestable: my assertion is that, once the requirements are defined, this result simply falls out of them. But to make this a bit clearer, I also added a non-normative note to this effect.

[2014-06-10, STL comments]

In the current wording, I would like to see the suggested changes described by bullet #6 changed:

Then change the 7 occurrences of "convertible to bool" in the denoted tables to "bool".

[2015-05-05 Lenexa]

STL: Alisdair wanted to do something here, but Daniel gave us updated wording.

[2015-07 Telecon]

Alisdair: Should specify we don't break short circuiting.
Ville: Looks already specified because that's the way it works for bool.
Geoffrey: Maybe add a note about the short circuiting.
B2/P2 is somewhat ambiguous. It implies that B has to be both implicitly convertible to bool and contextually convertible to bool.
We like this, just have nits.
Status stays Open.
Marshall to ping Daniel with feedback.

[2016-02-27, Daniel updates wording]

The revised wording has been updated from N3936 to N4567.

To satisfy the Kona 2015 committee comments, the wording in [booleantestable.requirements] has been improved to better separate the two different requirements of "can be contextually converted to bool" and "can be implicitly converted to bool". Both are necessary because it is possible to define a type that has the latter property but not the former, such as the following one:

2016-08-07, Daniel: The below example has been corrected to reduce confusion about the performed conversions as indicated
by the delta markers:

In [booleantestable.requirements] a note has been added to ensure that an implementation is not allowed to
break any short-circuiting semantics.

I decided to separate LWG 2587/2588 from this issue. Both these issues aren't exactly the
same but depending on the committee's position, their resolution might benefit from the new vocabulary introduced
here.

-1- […] In these tables, T is an object or reference type to be supplied by a C++ program instantiating a template; a, b, and c are values of type (possibly const) T; s and t are modifiable lvalues of type T; u denotes an identifier; rv is an rvalue of type T; v is an lvalue of type (possibly const) T or an rvalue of type const T; and BT denotes a type that meets the BooleanTestable requirements ([booleantestable.requirements]).

[…]

Table 17 — EqualityComparable requirements [equalitycomparable]

Expression

Return type

Requirement

a == b

BT

== is an equivalence relation, that is, it has the
following properties: […]

-?- A BooleanTestable type is a boolean-like type that also supports conversions to bool.
A type B meets the BooleanTestable requirements if the expressions described in Table ?? are valid
and have the indicated semantics, and if B also satisfies all the other requirements of this sub-clause
[booleantestable.requirements].

An object b of type B can be implicitly converted to bool and in addition can be
contextually converted to bool (Clause 4). The result values of both kinds of conversions shall be equivalent.

[Example: The types bool, std::true_type, and std::bitset<>::reference are
BooleanTestable types. — end example]

For the purpose of Table ??, let B2 and Bn denote types (possibly both equal to B or to each other)
that meet the BooleanTestable requirements, let b1 denote a (possibly const) value of B,
let b2 denote a (possibly const) value of B2, and let t1 denote a value of type
bool.

[Note: These rules ensure what an implementation can rely on, but do not grant it license to break the short-circuiting behavior of a BooleanTestable type. — end note]

Somewhere within the new sub-clause [booleantestable.requirements] insert the following new Table (?? denotes
the assigned table number):

-1- Requires: For all i, where 0 <= i and i < sizeof...(TTypes), get<i>(t) == get<i>(u) is a valid expression returning a type that meets the BooleanTestable requirements ([booleantestable.requirements]). sizeof...(TTypes) == sizeof...(UTypes).

-4- Requires: For all i, where 0 <= i and i < sizeof...(TTypes), get<i>(t) < get<i>(u) and get<i>(u) < get<i>(t) are valid expressions returning types that meet the BooleanTestable requirements ([booleantestable.requirements]). sizeof...(TTypes) == sizeof...(UTypes).

-12- In the following sections, a and b denote values of type X or const X,
difference_type and reference refer to the types iterator_traits<X>::difference_type and
iterator_traits<X>::reference, respectively, n denotes a value of difference_type, u,
tmp, and m denote identifiers, r denotes a value of X&, t denotes
a value of value type T, o denotes a value of some type that is writable to the output iterator, and BT
denotes a type that meets the BooleanTestable requirements ([booleantestable.requirements]).

[Drafting note: The wording changes below also fix
(a) unusual wording forms ("should work") that leave unclear in which sense they impose normative requirements, and
(b) the problem that the current wording seems to allow the predicate to mutate a call argument if that argument is not a dereferenced iterator.
Upon applying the new wording it became obvious that both the previous and the new wording have the effect that algorithms such as adjacent_find, search_n, unique, and unique_copy are currently not correctly described (because they have no iterator argument named first1), which could give rise to a new library issue.
— end drafting note]

-8- The Predicate parameter is used whenever an algorithm expects a function object (20.9) that, when applied to the result of dereferencing the corresponding iterator, returns a value testable as true. If an algorithm takes Predicate pred as its argument and first as its iterator argument, the expression pred(*first) shall have a type that meets the BooleanTestable requirements ([booleantestable.requirements]). The function object pred shall not apply any non-constant function through its argument.

-9- The BinaryPredicate parameter is used whenever an algorithm expects a function object that when applied to the result of dereferencing two corresponding iterators or to dereferencing an iterator and type T when T is part of the signature returns a value testable as true. If an algorithm takes BinaryPredicate binary_pred as its argument and first1 and first2 as its iterator arguments, the expression binary_pred(*first1, *first2) shall have a type that meets the BooleanTestable requirements ([booleantestable.requirements]). BinaryPredicate always takes the first iterator's value_type as its first argument, that is, in those cases when T value is part of the signature, the expression binary_pred(*first1, value) shall have a type that meets the BooleanTestable requirements ([booleantestable.requirements]). binary_pred shall not apply any non-constant function through any of its arguments.

-2- Compare is a function object type (20.9). Compare comp is used throughout for algorithms assuming an ordering relation. Let a and b denote two argument values whose types depend on the corresponding algorithm. Then the expression comp(a, b) shall have a type that meets the BooleanTestable requirements ([booleantestable.requirements]). The return value of comp(a, b), converted to bool, yields true if the first argument a is less than the second argument b, and false otherwise. It is assumed that comp will not apply any non-constant function through any of its arguments.

-1- Throughout this Clause, the names of template parameters are used to express type requirements. Predicate is a function object type (23.14 [function.objects]). Let pred denote an lvalue of type Predicate. Then the expression pred() shall have a type that meets the BooleanTestable requirements ([booleantestable.requirements]). The return value of pred(), converted to bool, yields true if the corresponding test condition is satisfied, and false otherwise.

Recently I received a Service Request (SR) alleging that one of our test cases has undefined behavior. The complaint is that 29.7.8 [template.mask.array] in C++11 (and the corresponding subclause in C++03) are interpreted by some people to require that in an assignment "a[mask] = b", the subscript mask and the rhs b must have the same number of elements.

IMHO, if that is the intended requirement, it should be stated explicitly.

but the semicolon cannot be part of an expression. The correction could omit the
semicolon, or change the word "expression" to "assignment" or "statement".

Here is the text of the SR, slightly modified for publication:

Subject: SR01174 LVS _26322Y31 has undefined behavior [open]

[Client:]
The test case t263.dir/_26322Y31.cpp seems to be illegal as it has an undefined
behaviour. I searched into the SRs but found SRs were not related to the topic
explained in this mail (SR00324, SR00595, SR00838).

[Plum Hall:]
Before I log this as an SR, I need to check one detail with you.

I did read the email thread you mentioned, and I did find a citation (see INCITS ISO/IEC 14882-2003
Section 26.3.2.6 on valarray computed assignments):

Quote: "If the array and the argument array do not have the same length, the behavior is undefined",

But this applies to computed assignment (*=, +=, etc), not to simple assignment. Here is the C++03 citation
re simple assignment:

26.3.2.2 valarray assignment [lib.valarray.assign]

valarray<T>& operator=(const valarray<T>&);

1 Each element of the *this array is assigned the value of the corresponding element of the argument array.
The resulting behavior is undefined if the length of the argument array is not equal to the length of the
*this array.

In the new C++11 (N3291), we find ...

26.6.2.3 valarray assignment [valarray.assign]

valarray<T>& operator=(const valarray<T>& v);

1 Each element of the *this array is assigned the value of the corresponding element of the argument
array. If the length of v is not equal to the length of *this, resizes *this to make
the two arrays the same length, as if by calling resize(v.size()), before performing the assignment.

So it looks like the testcase might be valid for C++11 but not for C++03; what do you think?

This template is a helper template used by the mask subscript operator:
mask_array<T> valarray<T>::operator[](const valarray<bool>&).

It has reference semantics to a subset of an array specified by a boolean mask. Thus, the expression a[mask] = b; has the effect of assigning the elements of b to the masked elements in a (those for which the corresponding element in mask is true).

1 These assignment operators have reference semantics, assigning the values of the argument array
elements to selected elements of the valarray<T> object to which it refers.

In particular, [one of the WG21 experts] insisted on the piece "the elements of b".

That is why I reported the test t263.dir/_26322Y31.cpp having an undefined behaviour.

[Plum Hall:]
OK, I can see that I will have to ask WG21; I will file an appropriate issue
with the Library subgroup. In the meantime, I will mark this testcase as "DISPUTED"
so that it is not required for conformance testing, until we get a definitive opinion.

[2012, Kona]

Moved to Open.

There appears to be a real need for clarification in the standard, and
implementations differ in their current interpretation. This will need
some research by implementers and a proposed resolution before further
discussion is likely to be fruitful.

IMO if we specified is_[nothrow_]constructible in terms of a variable
declaration whose validity requires destructibility, it is clearly a bug
in our specification and a failure to realize the actual original
intent. The specification should have been in terms of placement-new.

Daniel:
At the time of the specification this was intended, and the solution is not to remove the destruction semantics of is_constructible.

The design of is_constructible was also impacted by the previous Constructible concept, which explicitly contained destruction semantics. During the conceptification of the library this turned out to simplify the constraints, because you did not need to add Destructible all the time; it was often implied but never spelled out in C++03.

Pure construction semantics was considered useful as well, so HasConstructor also existed and would surely be useful as a trait too.

Another example that is often overlooked: this also affects wrapper types like pair, tuple, and array that contain potentially more than one type. This is easy to understand if you think of T1 having a deleted destructor and T2 having a constructor that may throw: the compiler may potentially need to use the destructor of T1 in the constructor of std::pair<T1, T2> to ensure that the core language requirements are satisfied (all previously fully constructed sub-objects must be destroyed).

A defaulted copy/move constructor for a class X is defined as deleted (11.4.3 [dcl.fct.def.delete])
if X has:
[…]
— any direct or virtual base class or non-static data member of a type with a destructor that is deleted
or inaccessible from the defaulted constructor,
[…]

Dave:
This is about is_nothrow_constructible in particular. The fact that it is
foiled by not having a noexcept dtor is a defect.

[2012, Kona]

Move to Open.

is_nothrow_constructible is defined in terms of is_constructible, which is defined
by looking at a hypothetical variable and asking whether the variable definition is known not to
throw exceptions. The issue claims that this also examines the type's destructor, given the context,
and thus will return false if the destructor can potentially throw. At least one
implementation (Howard's) does return false if the constructor is noexcept(true)
and the destructor is noexcept(false). So that's not a strained interpretation.
The issue is asking for this to be defined in terms of placement new, instead of in terms
of a temporary object, to make it clearer that is_nothrow_constructible looks at the
noexcept status of only the constructor, and not the destructor.

Sketch of what the wording would look like:

require is_constructible, and then also require that a placement new operation
does not throw. (Remembering the title of this issue... What does this imply for swap?)

If we accept this resolution, do we need any changes to swap?

STL argues: no, because you are already forbidden from passing anything with a throwing destructor to swap.

Dietmar argues: no, not true. Maybe statically the destructor can conceivably throw for some
values, but maybe there are some values known not to throw. In that case, it's correct to
pass those values to swap.

Iostreams should include a manipulator to toggle grouping on/off for
locales that support grouped digits. This has come up repeatedly and
been deferred. See LWG 826 for the previous attempt.

If one is using a locale that supports grouped digits, then output
will always include the generated grouping characters. However, very
plausible scenarios exist where one might want to output the number,
un-grouped. This is similar to existing manipulators that toggle
on/off the decimal point, numeric base, or positive sign.

I think we should say nothing special about app at construction time (thus leaving the write pointer at the beginning of the buffer).
Leave implementers wiggle room to ensure that subsequent writes append as they see fit, but don't change the existing rules
for the initial seek position.

The front matter in clause 17 should clarify that postconditions will not hold if a
standard library function exits via an exception. Postconditions or guarantees that
apply when an exception is thrown (beyond the basic guarantee) are described in an
"Exception safety" section.

[
2012-10 Portland: Move to Open
]

Consensus that we do not clearly say this, and that we probably should. A likely
location to describe the guarantees of postconditions could well be a new
sub-clause following 20.5.4.11 [res.on.required] which serves the same purpose
for requires clauses. However, we need such wording before we can make
progress.

Also, see 2137 for a suggestion that we want to see a paper resolving
both issues together.

[2015-05-06 Lenexa: EricWF to write paper addressing 2136 and 2137]

MC: Idea is to replace all such "If no exception" postconditions with "Exception safety" sections.

Proposed resolution:

2137. Misleadingly constrained post-condition in the presence of exceptions

If no exception is thrown, flags() returns
f and mark_count() returns the number of marked sub-expressions within the expression.

The default expectation in the library is that post-conditions only hold if there is no failure
(see also 2136); therefore the initial condition should be removed to prevent any
misunderstanding.

[
2012-10 Portland: Move to Open
]

A favorable resolution clearly depends on a favorable resolution to 2136.
There is also a concern that this is just one example of where we would want to apply
such a wording clean-up; what is really needed to resolve both this issue and
2136 is a paper providing the clause 17 wording that gives the guarantee
for postcondition paragraphs, and then reviews clauses 18-30 to apply that
guarantee consistently. We do not want to pick up these issues piecemeal, as we risk
opening many issues in an ongoing process.

The expression "user-defined type" is used in several places in the standard, but I'm not sure what
it means. More specifically, is a type defined in the standard library a user-defined type?

From my understanding of English, it is not. From most of the uses of this term in the standard, it
seems to be considered user-defined. In some places, I'm hesitant, e.g. 20.5.4.2.1 [namespace.std] p1:

A program may add a template specialization for any standard library template to namespace std
only if the declaration depends on a user-defined type and the specialization meets the standard library
requirements for the original template and is not explicitly prohibited.

Does it mean we are allowed to add in the namespace std a specialization for
std::vector<std::pair<T, U>>, for instance?

Additional remarks from the reflector discussion: The traditional meaning of user-defined types refers
to class types and enum types, but the library actually means here user-defined types that are not
(purely) library-provided. Presumably a new term - like user-provided type - should be introduced
and properly defined.

[
2012-10 Portland: Move to Deferred
]

The issue is real, in that we never define this term and rely on a "know it when I see it"
intuition. However, there is a fear that any attempt to pin down a definition is more
likely to introduce bugs than solve them - getting the wording for this precisely correct
is likely far more work than we are able to give it.

There is unease at simple closing as NAD, but not real enthusiasm to provide wording either.
Move to Deferred as we are not opposed to some motivated individual coming back with full
wording to review, but do not want to go out of our way to encourage someone to work on this
in preference to other issues.

[2014-02-20 Re-open Deferred issues as Priority 4]

[2015-03-05 Jonathan suggests wording]

I dislike the suggestion to change to "user-provided" type because I already find the
difference between user-declared / user-provided confusing for special member functions,
so I think it would be better to use a completely different term. The core language
uses "user-defined conversion sequence" and "user-defined literal" and
similar terms for things which the library provides, so I think we
should not refer to "user" at all to distinguish entities defined
outside the implementation from things provided by the implementation.

I propose "program-defined type" (and "program-defined specialization"), defined below.
The P/R below demonstrates the scope of the changes required, even if this name isn't adopted.
I haven't proposed a change for "User-defined facets" in [locale].

[Lenexa 2015-05-06]

RS, HT: The core language uses "user-defined" in a specific way, including library things but excluding core language things, let's use a different term.

MC: Agree.

RS: "which" should be "that", x2

RS: Is std::vector<MyType> a "program-defined type"?

MC: I think it should be.

TK: std::vector<int> seems to take the same path.

JW: std::vector<MyType> isn't program-defined, we don't need it to be, anything that depends on that also depends on MyType.

TK: The type defined by an "explicit template specialization" should be a program-defined type.

RS: An implicit instantiation of a "program-defined partial specialization" should also be a program-defined type.

JY: This definition formatting is horrible and ugly, can we do better?

RS: Checking ISO directives.

RS: Define "program-defined type" and "program-defined specialization" instead, to get rid of the angle brackets.

<type> a class type or enumeration type which is not part of the C++
standard library and not defined by the implementation. [Note: Types
defined by the implementation include extensions (4.1 [intro.compliance])
and internal types used by the library. — end note]

program-defined

<specialization> an explicit template specialization or partial
specialization which is not part of the C++ standard library and not
defined by the implementation.

-1- The behavior of a C++ program is undefined if it adds declarations or definitions to namespace std or to a
namespace within namespace std unless otherwise specified. A program may add a template specialization
for any standard library template to namespace std only if the declaration depends on a
program-defined type and the specialization meets the standard library requirements for the
original template and is not explicitly prohibited.

-2- The behavior of a C++ program is undefined if it declares

[…]

A program may explicitly instantiate a template defined in the standard library only if the declaration
depends on the name of a program-defined type and the instantiation meets the standard
library requirements for the original template.

-4- The is_error_code_enum and is_error_condition_enum may be specialized for
program-defined types to indicate that such types are eligible for class error_code
and class error_condition automatic conversions, respectively.

-1- Remarks: automatically detects […]. A program may specialize this template to derive from
true_type for a program-defined type T that does not have a nested
allocator_type but nonetheless can be constructed with an allocator where either: […]

-2- Instantiations of the is_bind_expression template […]. A program may specialize
this template for a program-defined type T to have a BaseCharacteristic
of true_type to indicate that T should be treated as a subexpression in a bind call.

-2- Instantiations of the is_placeholder template […]. A program may specialize this template for a
program-defined type T to have a BaseCharacteristic of
integral_constant<int, N> with N > 0 to indicate that T should be
treated as a placeholder type.

The unordered associative containers defined in 23.5 use specializations of the class template hash […],
the instantiation hash<Key> shall:

[…]

[…]

[…]

[…]

satisfy the requirement that the expression h(k), where h is an object of type
hash<Key> and k is an object of type Key, shall not throw an exception unless
hash<Key> is a program-defined specialization that depends on at least one
program-defined type.

The member typedef type shall be
defined or omitted as specified below.
[…]. A program may
specialize this trait if at least one
template parameter in the
specialization is a program-defined type.
[…]


The template definitions in the C++ standard library refer to various named requirements whose details are set out in
tables 17-24. In these tables, T is an object or reference type to be supplied by a C++ program instantiating
a template; a, b, and c are values of type (possibly const) T; s
and t are modifiable lvalues of type T; u denotes an identifier; rv is an rvalue of
type T; and v is an lvalue of type (possibly const) T or an rvalue of type const T.

Is it really intended that T may be a reference type? If so, what should a, b, c,
s, t, u, rv, and v mean? For example, are "int &" and
"int &&" MoveConstructible?

As far as I understand, we can explicitly specify template arguments for std::swap and std::for_each.
Can we use reference types there?

Requires: F and each Ti in Args shall satisfy the MoveConstructible requirements.

When the first argument of this constructor is an lvalue (e.g. a name of a global function), template argument for F
is deduced to be lvalue reference type. What should "MoveConstructible" mean with regard to an lvalue reference
type? Maybe the wording should say that std::decay<F>::type and each std::decay<Ti>::type (where
Ti is an arbitrary item in Args) shall satisfy the MoveConstructible requirements?

[2013-03-15 Issues Teleconference]

Moved to Open.

The questions raised by the issue are real, and should have a clear answer.

[2015-10, Kona Saturday afternoon]

STL: std::thread needs to be fixed, and anything behaving like it needs to be fixed, rather than reference types. std::bind gets this right. We need to survey this. GR: That doesn't sound small to me. STL: Seach for CopyConstructible etc. It may be a long change, but not a hard one.

MC: It seems that we don't have a PR. Does anyone have one? Is anyone interested in doing a survey?

[2016-03, Jacksonville]

Casey volunteers to make a survey

[2016-06, Oulu]

During an independent survey performed by Daniel as part of the analysis of LWG 2716,
some overlap was found between these two issues. Daniel suggested taking responsibility for surveying
LWG 2146 and opined that the P/R of LWG 2716 should be restricted to forwarding
references, where the deduction to lvalue references can happen without providing an explicit template
argument, just by providing an lvalue function argument.

In C++11, basic_string is not described as a "container", and is not governed by the allocator-aware
container semantics described in sub-clause 26.2 [container.requirements]; as a result, any
requirements or contracts for the basic_string interface must be documented in Clause
24 [strings].

Sub-clause 24.3.2.6.8 [string.swap] defines the swap member function with no requirements, and
with guarantees to execute in constant time without throwing. Fulfilling such a contract is not reasonable
in the presence of unequal non-propagating allocators.

In contrast, 26.2.1 [container.requirements.general] p7 declares the behavior of member swap
for containers with unequal non-propagating allocators to be undefined.

Resolution proposal:

Additional language from Clause 26 [containers] should probably be copied to Clause
24 [strings]. I will refrain from an exact recommendation, however, as I am raising further
issues related to the language in Clause 26 [containers].

Sub-clause 20.5.3.2 [swappable.requirements] defines two notions of swappability: a binary version defining
when two objects are swappable with one another, and a unary notion defining whether an object is
swappable (without qualification), with the latter definition requiring that the object satisfy the
former with respect to all values of the same type.

Let T be a container type based on a non-propagating allocator whose instances do not necessarily
compare equal. Then sub-clause 26.2.1 [container.requirements.general] p7 implies that no object t
of type T is swappable (by the unary definition).

Throughout the standard it is the unary definition of "swappable" that is listed as a requirement (with the
exceptions of 23.2.2 [utility.swap] p4, 23.4.2 [pairs.pair] p31, 23.5.3.3 [tuple.swap] p2,
28.6.3 [alg.swap] p2, and 28.6.3 [alg.swap] p6, which use the binary definition). This renders
many of the mutating sequence algorithms of sub-clause 28.6 [alg.modifying.operations], for example,
inapplicable to sequences of standard container types, even where every element of the sequence is swappable
with every other.

Note that this concern extends beyond standard containers to all future allocator-based types.

Resolution proposal:

I see two distinct straightforward solutions:

Modify the requirements of algorithms from sub-clause 28.6 [alg.modifying.operations], and all other
places that reference the unary "swappable" definition, to instead use the binary "swappable with" definition
(over a domain appropriate to the context). The unary definition of "swappable" could then be removed from the
standard.

I favor the latter solution, for reasons detailed in the following issue.

[
2012-10 Portland: Move to Open
]

The issue is broader than containers with stateful allocators, although they are the most obvious
example contained within the standard itself. The basic problem is that once you have a stateful
allocator that does not propagate_on_swap, whether two objects of this type can be
swapped with well-defined behavior is a run-time property (the allocators compare equal) rather
than a simple compile-time property that can be deduced from the type. Strictly speaking, any
type where the nature of swap is a runtime property does not meet the swappable
requirements of C++11, although typical sequences of such types are going to have elements that
are all swappable with any other element in the sequence (using our other term of art
for specifying requirements) as the common case is a container of elements who all share the
same allocator.

The heart of the problem is that the swappable requirements demand that any two objects
of the same type be swappable with each other, so if any two such objects would not
be swappable with each other, then the whole type is never swappable. Many
algorithms in clause 25 are specified in terms of swappable which is essentially an
overspecification as all they actually need is that any element in the sequence is swappable
with any other element in the sequence.

At this point Howard joins the discussion and points out that the intent of introducing the
two swap-related terms was to support vector<bool>::reference types, and we are
reading something into the wording that was never intended. Consuses is that regardless of
the intent, that is what the words today say.

There is some support to see a paper reviewing the whole of clause 25 for this issue, and
other select clauses as may be necessary.

There was some consideration to introducing a note into the front of clause 25 to indicate
swappable requirements in the clause should be interpreted to allow such awkward
types, but ultimately no real enthusiasm for introducing a swappable for clause 25
requirement term, especially if it confusingly had the same name as a term used with a
subtly different meaning through the rest of the standard.

There was no enthusiasm for the alternate resolution of requiring containers with unequal
allocators that do not propagate provide a well-defined swap behavior, as it is not
believed to be possible without giving swap linear complexity for such values,
and even then would require adding the constraint that the container element types are
CopyConstructible.

Final conclusion: move to open pending a paper from a party with a strong interest in
stateful allocators.

Sub-clause 23.2.2 [utility.swap] defines a non-member 'swap' function with defined behavior for
all MoveConstructible and MoveAssignable types. It does not guarantee
constant-time complexity or noexcept in general, however this definition does
render all objects of MoveConstructible and MoveAssignable type swappable
(by the unary definition of sub-clause 20.5.3.2 [swappable.requirements]) in the absence of
specializations or overloads.

The overload of the non-member swap function defined in Table 96, however,
defines semantics incompatible with the generic non-member swap function,
since it is defined to call a member swap function whose semantics are
undefined for some values of MoveConstructible and MoveAssignable types.

The obvious (perhaps naive) interpretation of sub-clause 20.5.3.2 [swappable.requirements] is as a guide to
the "right" semantics to provide for a non-member swap function (called in
the context defined by 20.5.3.2 [swappable.requirements] p3) in order to provide interoperable
user-defined types for generic programming. The standard container types don't follow these guidelines.

More generally, the design in the standard represents a classic example of "contract narrowing". It
is entirely reasonable for the contract of a particular swap overload to provide more
guarantees, such as constant-time execution and noexcept, than are provided by the swap
that is provided for any MoveConstructible and MoveAssignable types, but it is not
reasonable for such an overload to fail to live up to the guarantees it provides for general types when
it is applied to more specific types. Such an overload or specialization in generic programming is akin
to an override of an inherited virtual function in OO programming: violating a superclass contract in a
subclass may be legal from the point of view of the language, but it is poor design and can easily lead
to errors. While we cannot prevent user code from providing overloads that violate the more general
swap contract, we can avoid doing so within the library itself.

My proposed resolution is to draw a sharp distinction between member swap functions, which provide
optimal performance but idiosyncratic contracts, and non-member swap functions, which should always
fulfill at least the contract of 23.2.2 [utility.swap] and thus render objects swappable. The member
swap for containers with non-propagating allocators, for example, would offer constant-time
guarantees and noexcept but would only offer defined behavior for values with allocators that compare
equal; non-member swap would test allocator equality and then dispatch to either member swap or
std::swap depending on the result, providing defined behavior for all values (and rendering the type
"swappable"), but offering neither the constant-time nor the noexcept guarantees.

[2013-03-15 Issues Teleconference]

Moved to Open.

This topic deserves more attention than can be given in the telecon, and there is no proposed resolution.

[2013-03-15 Issues Teleconference]

Moved to Open.

This topic deserves more attention than can be given in the telecon, and there is no proposed resolution.

Then when the project was compiled by a "new" compiler that implemented bool as defined by the
evolving C++98 or C99 standards, those lines would be skipped; but when compiled by an "old" compiler that
didn't yet provide bool, true, and false, then the #define's would provide a
simulation that worked for most purposes.

It turns out that there is an unfortunate ambiguity in the name. One interpretation is as shown above, but
a different reading says "bool, true, and false are #define'd", i.e. that the meaning of the macro is to
assert that these names are macros (not built-in) ... which is true in C, but not in C++.

In C++11, the name appears in parentheses followed by a stray period, so
some editorial change is needed in any event:

"The contents of these headers are the same as the Standard C library headers <setjmp.h>,
<signal.h>, <stdalign.h>, <stdarg.h>, <stdbool.h>,
<stdlib.h>, and <time.h>, respectively, with the following
changes:",

and para 8 says

"The header <cstdbool> and the header <stdbool.h> shall
not define macros named bool, true, or false."

Thus para 8 doesn't exempt the C++ implementation from the arguably clear requirement of the C standard, to
provide a macro named __bool_true_false_are_defined defined to be 1.

Real implementations of the C++ library differ, so the user cannot count upon any consistency; furthermore, the
usefulness of the transition tool has faded long ago.

That's why my suggestion is that both C and C++ standards should eliminate any mention of
__bool_true_false_are_defined. In that case, the name belongs to implementers to provide, or not, as
they choose.

[2013-03-15 Issues Teleconference]

Moved to Open.

While not strictly necessary, the clean-up looks good.

We would like to hear from our C liaison before moving on this issue though.

[2015-05 Lenexa]

LWG agrees. Jonathan provides wording.

[2017-03-04, Kona]

The reference to <cstdbool> in p2 needs to be struck as well. Continue the discussion on the reflector once the DIS is available.

Objects of std::array<T,N> are supposed to be initialized with aggregate initialization (when
not the destination of a copy or move). This clearly works when N is positive. What happens when N
is zero? To continue using an (inner) set of braces for initialization, a std::array<T,0> implementation
must have an array member of at least one element, and let default initialization take care of those secret elements.
This cannot work when T has a set of constructors and the default constructor is deleted from that set.
Solution: Add a new paragraph in 26.3.7.5 [array.zero]:

The unspecified internal structure of array for this case shall allow initializations like:

array<T, 0> a = { };

and said initializations must be valid even when T is not default-constructible.

[2012, Portland: Move to Open]

Some discussion to understand the issue, which is that implementations currently have freedom to implement
an empty array by holding a dummy element, and so might not support value initialization, which is
surprising when trying to construct an empty container. However, this is not mandated, it is an unspecified
implementation detail.

Jeffrey points out that the implication of 26.3.7.1 [array.overview] is that this initialization syntax
must be supported by empty array objects already. This is a surprising inference that was not
obvious to the room, but consensus is that the reading is accurate, so the proposed resolution is not necessary,
although the increased clarity may be useful.

Further observation is that the same clause effectively implies that T must always be DefaultConstructible,
regardless of N for the same reasons - as an initializer-list may not supply enough values, and the
remaining elements must all be value initialized.

Concern that we are dancing angels on the head of a pin, and that relying on such subtle implications in wording is
not helpful. We need a clarification of the text in this area, and await wording.

[2015-02 Cologne]

DK: What was the outcome of Portland? AM: Initially we thought we already had the intended behaviour.
We concluded that T must always be DefaultConstructible, but I'm not sure why. GR: It's p2 in
std::array, "up to N". AM: That wording already implies that "{}" has to work when N
is zero. But the wording of p2 needs to be fixed to make clear that it does not imply that T must be
DefaultConstructible.

Conclusion: Update wording, revisit later.

[2015-10, Kona Saturday afternoon]

MC: How important is this? Can you not just use default construction for empty arrays?

TK: It needs to degenerate properly from a pack. STL agrees.

JW: Yes, this is important, and we have to make it work.

MC: I hate the words "initialization like".

JW: I'll reword this.

WEB: Can I ask that once JW has reworded this we move it to Review rather than Open?

MC: We'll try to review it in a telecon and hopefully get it to tentatively ready.

STL: Double braces must also work: array<T, 0> a = {{}};.

Jonathan to reword.

Proposed resolution:

This wording is relative to N3376.

Add the following new paragraph between the current 26.3.7.5 [array.zero] p1 and p2:

There are various operations on std::vector that can cause elements of the vector to be
moved from one location to another. A move operation can use either rvalue or const lvalue as
argument; the choice depends on the value of !is_nothrow_move_constructible<T>::value &&
is_copy_constructible<T>::value, where T is the element type. Thus, some operations
on std::vector (e.g. 'resize' with single parameter, 'reserve', 'emplace_back') should have
conditional requirements. For example, let's consider the requirement for 'reserve' in N3376 –
26.3.11.3 [vector.capacity]/2:

Requires: T shall be MoveInsertable into *this.

This requirement is not sufficient if an implementation is free to select copy constructor when
!is_nothrow_move_constructible<T>::value && is_copy_constructible<T>::value
evaluates to true. Unfortunately, is_copy_constructible cannot reliably determine whether
T is really copy-constructible. A class may contain a public non-deleted copy constructor whose
definition does not exist or cannot be instantiated successfully (e.g.,
std::vector<std::unique_ptr<int>> has a copy constructor, but this type is not
copy-constructible). Thus, the actual requirements should be:

if !is_nothrow_move_constructible<T>::value && is_copy_constructible<T>::value
then T shall be CopyInsertable into *this;

otherwise T shall be MoveInsertable into *this.

Maybe it would be useful to introduce a new name for such conditional requirement (in addition to
"CopyInsertable" and "MoveInsertable").

[2016-08 Chicago]

The problem does not appear to be as severe as described. The MoveInsertable
requirements are consistently correct, but an issue may arise on the
exception-safety guarantees when we check for
is_copy_constructible_v<T>. The problem, as described, is
typically for templates that appear to have a copy constructor, but one that
fails to compile once instantiated, and so gives a misleading result for the
trait.

In general, users should not provide such types, and the standard would not
serve users well by trying to address support for such types. However, the
standard should not be providing such types either, such as
vector<unique_ptr<T>>. A possible resolution would be
to tighten the constraints in Table 80 — Container Requirements, so that if
the Requirements for the copy constructor/assignment operator of a container
are not satisfied, that operation shall be deleted.

A further problem highlighted by this approach is that there are no constraints on
the copy-assignment operator, so that vector<unique_ptr<T>>
would appear to be CopyAssignable! However, we can lift the equivalent constraints from
the Allocator-aware container requirements.

[08-2016, Chicago]

Fri PM: Move to Open

[2017-11 Albuquerque Saturday issues processing]

There's a bunch of uses of "shall" here that are incorrect. Also, CopyInsertable contains some semantic requirements, which can't be checked at compile time, so 'ill-formed' is not possible for detecting that.

I think that an implementation of vector's 'emplace' should initialize an intermediate object with
v.back() before any shifts take place, then perform all necessary shifts and finally replace the
value pointed to by v.begin() with the value of the intermediate object. So, I would expect the
following output:

3
1
2
3

GNU C++ 4.7.1 and GNU C++ 4.8.0 produce other results:

2
1
2
3

Howard Hinnant:

I believe Nikolay is correct that vector should initialize an intermediate object with v.back()
before any shifts take place. As Nikolay pointed out in another email, this appears to be the only way to
satisfy the strong exception guarantee when an exception is not thrown by T's copy constructor,
move constructor, copy assignment operator, or move assignment operator as specified by
26.3.11.5 [vector.modifiers]/p1. I.e. if the emplace construction throws, the vector must remain unaltered.

That leads to an implementation that tolerates the possibility that objects bound to the function parameter pack of the
emplace member function may be elements or sub-objects of elements of the container.

My position is that the standard is correct as written, but needs a clarification in this area. Self-referencing
emplace should be legal and give the result Nikolay expects. The proposed resolution of LWG 760
is not correct.

[2015-02 Cologne]

LWG agrees with the analysis including the assessment of LWG 760 and would appreciate a concrete wording proposal.

[2015-04-07 dyp comments]

The Standard currently does not require that creation of such
intermediate objects is legal. 26.2.3 [sequence.reqmts] Table 100
— "Sequence container requirements" currently specifies:

Table 100 — Sequence container requirements

[…]

Expression: a.emplace(p, args);
Return type: iterator
Assertion/note pre-/post-condition: Requires: T is EmplaceConstructible into
X from args. For vector and deque, T is also MoveInsertable into X and
MoveAssignable. […]

[…]

The EmplaceConstructible concept is defined via
allocator_traits<A>::construct in 26.2.1 [container.requirements.general] p15.5. That's surprising to me
since the related concepts use the suffix Insertable if they
refer to the allocator. An additional requirement such as
std::is_constructible<T, Args...> is necessary to allow
creation of intermediate objects.

The creation of intermediate objects also affects other functions, such
as vector.insert. Since aliasing the vector is only allowed for
the single-element forms of insert and emplace (see
LWG 526), the range forms are not affected. Similarly,
aliasing is not allowed for the rvalue-reference overload. See also LWG
2266.
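The single-element aliasing requirement can be illustrated with a sketch (single_insert_alias is an invented name; the expected result relies on the guarantee discussed in LWG 526):

```cpp
#include <vector>

// The single-element form of insert may be passed a reference into the
// vector itself, so the implementation must take a copy of the value
// before relocating elements.
std::vector<int> single_insert_alias() {
    std::vector<int> v{1, 2, 3};
    v.insert(v.begin(), v[1]);  // v[1] aliases an element of v
    return v;                   // yields 2 1 2 3
}
```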

There might be a problem with a requirement of
std::is_constructible<T, Args...> related to the issues
described in LWG 2461. For example, a scoped allocator
adapter passes additional arguments to the constructor of the value
type. Recent implementations of libstdc++
and libc++ currently do not do this when creating the intermediate objects; they simply create
the intermediate object by perfectly forwarding the arguments. If such
an intermediate object is then moved to its final destination in the
vector, a change of the allocator instance might be required —
potentially leading to an expensive copy. One can also imagine worse
problems, such as run-time errors (allocators not comparing equal at
run-time) or compile-time errors (if the value type cannot be created
without the additional arguments). I have not looked in detail into this
issue, but I'd be reluctant adding a requirement such as
std::is_constructible<T, Args...> without further
investigation.

It should be noted that the creation of intermediate objects currently
is inconsistent in libstdc++ vs libc++. For example, libstdc++ creates
an intermediate object for vector.insert, but not
vector.emplace, whereas libc++ does the exact opposite in this
respect.

A live demo of the inconsistent creation of intermediate objects can be
found here.

[2015-10, Kona Saturday afternoon]

HH: If it were easy, it'd have wording. Over the decades I have flipped 180 degrees on this. My current position is that it should work even if the element is in the same container.

TK: What's the implementation status? JW: Broken in GCC. STL: Broken in MSVS. Users complain about this every year.

MC: 526 says push_back has to work.

HH: I think you have to make a copy of the element anyway for reasons of exception safety. [Discussion of exception guarantees]

STL: vector has strong exception guarantees. Could we not just provide the basic guarantee here?

HH: It would terrify me to relax that guarantee. It'd be an ugly, imperceptible runtime error.

HH: I agree if we had a clean slate that strong exception safety is costing us here, and we shouldn't provide it if it costs us.

STL: I have a mail here, "how can vector provide the strong guarantee when inserting in the middle".

HH: The crucial point is that you only get the strong guarantee if the exception is thrown by something other than the copy and move operations that are used to make the hole.

STL: I think we need to clean up the wording. But it does mandate currently that the self-emplacement must work, because nothing says that you can't do it. TK clarifies that a) self-emplacement must work, and b) you get the strong guarantee only if the operations for making the hole don't throw, otherwise basic. HH agrees. STL wants this to be clear in the Standard.

STL: Should it work for deque, too? HH: Yes.

HH: I will attempt wording for this.

TK: Maybe mail this to the reflector, and maybe someone has a good idea?

JW: I will definitely not come up with anything better, but I can critique wording.

Moved to Open; Howard to provide wording, with feedback from Jonathan.

[2017-01-25, Howard suggests wording]

[2018-1-26 issues processing telecon]

Status to 'Tentatively Ready' after adding a period to Howard's wording.

Requires: T is EmplaceConstructible into X
from args. For vector and deque, T is also MoveInsertable into X and MoveAssignable. Effects: Inserts an object of type T
constructed with std::forward&lt;Args&gt;(args)... before p. [Note: args may directly or indirectly refer to a value
in a. — end note]

2173(i). The meaning of operator + in the description of the algorithms

The specification for output iterators is somewhat tricky, because here a sequence of increments is required to
be combined with intervening assignments to the dereferenced iterator. I tried to respect this
fact by using a conceptual assignment operation as part of the specification.

Another problem in the provided as-if-code is the question of which requirements are imposed on n. Unfortunately,
the corresponding function advance is completely underspecified in this regard, so I couldn't borrow wording
from it. We cannot even assume here that n is the difference type of the iterator, because for output iterators there is
no requirement for this associated type to be defined. The presented wording attempts to minimize assumptions, but may still
be considered controversial.

-12- In the description of the algorithms operators + and - are used for some of the iterator categories for which
they do not have to be defined. In these cases the semantics of a+n is the same as that of

X tmp = a;
advance(tmp, n);
return tmp;

when X meets the input iterator requirements (27.2.3 [input.iterators]), otherwise it is the same as that of

The second and third constructors do not have an Allocator argument, so despite the "all match_results
constructors", it is not possible to use "the Allocator argument" for the second and third constructors.

The requirements for those two constructors also do not give any guidance. The second constructor has no language
about allocators, and the third states that the stored Allocator value is move constructed from
m.get_allocator(), but doesn't require using that allocator to allocate memory.

The same basic problem recurs in 31.10.6 [re.results.all], which gives the required return value for
get_allocator():

Returns: A copy of the Allocator that was passed to the object's constructor or, if that allocator
has been replaced, a copy of the most recent replacement.

Again, the second and third constructors do not take an Allocator, so there is nothing that meets this
requirement when those constructors are used.

The effects of the two assignment operators are specified in Table 141. Table 141 makes no mention of allocators,
so, presumably, they don't touch the target object's allocator. That's okay, but it leaves the question:
match_results::get_allocator() is supposed to return "A copy of the Allocator that was passed to the
object's constructor or, if that allocator has been replaced, a copy of the most recent replacement"; if assignment
doesn't replace the allocator, how can the allocator be replaced?

The hash functor and key-comparison functor of unordered containers are allowed to throw on swap.

26.2.7.1 [unord.req.except]p3 "For unordered associative containers, no swap function throws
an exception unless that exception is thrown by the swap of the container's Hash or Pred object (if any)."

In such a case we must offer the basic exception safety guarantee, where both objects are left in valid
but unspecified states, and no resources are leaked. This yields a corrupt, un-usable container if the
first swap succeeds, but the second fails by throwing, as the functors form a matched pair.

So our basic scenario is first, swap the allocators if the allocators propagate on swap, according to
allocator_traits. Next we swap the pointers to our internal hash table data structures, so that
they match the allocators that allocated them. (Typically, this operation cannot throw). Now our containers
are back in a safely destructible state if an exception follows.

Next, let's say we swap the hash functor, and that throws. We have a corrupt data structure, in that the
buckets are not correctly indexed by the correct functors, lookups will give unpredictable results, etc.
We can safely restore a usable state by forcibly clearing each container - which does not leak resources
and leaves us with two (empty but) usable containers.

Now let us assume that the hasher swap succeeds. Next we swap the equality comparator functor, and this
too could throw. The important point to bear in mind is that these two functors form an important pairing
- two objects that compare equal by the equality functor must also hash to the same value. If we swap
one without the other, we most likely leave the container in an unusable state, even if we clear out all
elements.

1. A colleague pointed out that the solution for this is to dynamically allocate the two functors, and then
we need only swap pointers, which is not a throwing operation. And if we don't want to allocate on default
construction (a common QoI request), we might consider moving to dynamically allocated functors whenever
swap is called, or on first insertion. Of course, allocating memory in swap is a whole
new can of worms, but this does not really sound like the design we had intended.
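Option 1 can be sketched as follows (functor_pack and its members are hypothetical names, not any real implementation's internals):

```cpp
#include <memory>

// Hypothetical sketch of option 1: hold the functors behind unique_ptr
// so that swapping them only exchanges pointers and cannot throw.
template <class Hash, class Pred>
struct functor_pack {
    std::unique_ptr<Hash> hash;
    std::unique_ptr<Pred> pred;

    void swap(functor_pack& other) noexcept {
        hash.swap(other.hash);  // pointer swap: never throws
        pred.swap(other.pred);
    }
};

// Demonstration that swap exchanges only the pointers (int stands in
// for an arbitrary functor type here).
inline bool functor_pack_demo() {
    functor_pack<int, int> a{std::make_unique<int>(1), std::make_unique<int>(2)};
    functor_pack<int, int> b{std::make_unique<int>(3), std::make_unique<int>(4)};
    a.swap(b);
    return *a.hash == 3 && *a.pred == 4 && *b.hash == 1 && *b.pred == 2;
}
```

As the text notes, this trades the throwing-swap problem for allocation on construction or on first use.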

2. The simplest option is to say that we do not support hasher or equality functors that throw on ADL
swap. Note that the requirement is simply to not throw, rather than to be explicitly
marked as noexcept. Throwing functors are allowed, so long as we never use values that
would actually manifest a throw when used in an unordered container.

Pablo went on to give me several more options, to be sure we have a full set to consider:

3. Disallow one or the other functor from throwing. In that case, the
possibly-throwing functor must be swapped first, then the other functor,
the allocator, and the data pointer(s) afterwards (in any order -- there
was a TC that allocator assignment and swap may not throw if the
corresponding propagation trait is true.). Of course, the question
becomes: which functor is allowed to throw and which one is not?

4. Require that any successful functor swap be reliably reversible.
This is very inventive. I know of no other place in the standard where
such a requirement is stated, though I have occasionally wanted such a
guarantee.

5. Allow a failed swap to leave the containers in a state where future
insertions may fail for reasons other than is currently allowed.
Specifically, if the hash and equality functors are out of sync, all
insertions will fail. Presumably some "incompletely swapped" exception
would be thrown. This is "slightly" inventive, although people have been
discussing "radioactive" states for a while.

31.10.1 [re.results.const]/3: "Move-constructs an object of class match_results satisfying the same
postconditions as Table 141."

Table 141 lists various member functions and says that their results should be the results of the corresponding member
function calls on m. But m has been moved from, so the actual requirement ought to be based on the
value that m had before the move construction, not on m itself.

In addition to that, the requirements for the copy constructor should refer to Table 141.

Ganesh:

Also, the requirements for move-assignment should refer to Table 141. Further it seems as if in Table 141 all phrases of
"for all integers n < m.size()" should be replaced by "for all unsigned integers
n < m.size()".

The class template match_results shall satisfy the requirements of an allocator-aware container and of a
sequence container, as specified in 26.2.3 [sequence.reqmts], except that only operations defined for
const-qualified sequence containers are supported.

can be read to require the existence of the described constructors as well, but they do not exist in the
synopsis.

It should be clarified whether (a) constructors are an exception to the above-mentioned operations, or (b)
at least some of them (like those accepting a match_results value and an allocator) should be added.

As visible in several places of the standard (including the core language), constructors seem usually to be considered
as "operations" and they certainly can be invoked for const-qualified objects.

The proposed resolution given below applies only the minimal necessary fix, i.e., it excludes constructors from
the above requirement.

[2013-04-20, Bristol]

Check current implementations to see what they do and, possibly, write a paper.

[2013-09 Chicago]

Ask Daniel to update the proposed wording to include the allocator copy and move constructors.

The class template match_results shall satisfy the requirements of an allocator-aware container and of a
sequence container, as specified in 26.2.3 [sequence.reqmts], except that only operations defined for
const-qualified sequence containers that are not constructors are supported.

[2015-05-06 Lenexa]

MC passes important knowledge to EF.

VV, RP: Looks good.

TK: Second form should be conditionally noexcept

JY: Sequence constructors are not here, but mentioned in the issue writeup. Why?

TK: That would have been fixed by the superseded wording.

JW: How does this interact with Mike Spertus' allocator-aware regexes? [...] Perhaps it doesn't.

JW: Can't create match_results, want both old and new resolution.

JY: It's problematic that users can't create these, but not this issue.

Change 31.10.1 [re.results.const] as indicated: [Drafting note: Paragraph 6, as currently written,
makes little sense, because the noexcept does not allow any exception to propagate. Furthermore, the allocator requirements
do not allow for throwing move constructors. Deleting it seems close to editorial. — end drafting note]

-5- Effects: Move-constructs an object of class match_results from m satisfying the same postconditions
as Table 142. Additionally For the first form, the stored Allocator value is move constructed
from m.get_allocator().

The user cannot specify a max_load_factor for their unordered container
at construction, it must be supplied after the event, when the container is
potentially not empty. The contract for this method is deliberately vague: it does not
guarantee to use the value supplied by the user, and any value actually used
serves as a ceiling that the container will attempt to respect.

The only guarantee we have is that, if user requests a max_load_factor
that is less than the current load_factor, then the operation will take
constant time, thus outlawing an implementation that chooses to rehash and so
preserve as a class invariant that load_factor < max_load_factor.

Reasonable options conforming to the standard include ignoring the user's request
if the requested value is too low, or deferring the rehash to the next insert
operation and allowing the container to have a strange state (wrt max_load_factor)
until then - and there is still the question of rehashing if the next insert
is for a duplicate key in a unique container.

Given the deliberate vagueness of the current wording, to support a range of reasonable
(but not perfect) behaviors, it is not clear why the equally reasonable rehash
to restore the constraint should be outlawed. It is not thought that this is a performance
critical operation, where users will be repeatedly setting low load factors on populated
containers, in a tight loop or (less likely) in an instant-response scenario.

[2013-03-15 Issues Teleconference]

Moved to Open.

Alisdair to provide wording.

[2016-11-12, Issaquah]

Sat PM: Howard to provide wording

[2016-11-17 Howard provided wording.]

The provided wording is consistent with LWG discussion in Issaquah. An implementation
of the proposed wording would be setting max_load_factor() to
max(z, load_factor()). This preserves the container invariant:

load_factor() <= max_load_factor()

And it preserves the existing behavior that no rehash is done by this operation.

If it is desired to change the max_load_factor() to something smaller than
the current load_factor() that can be done by first reducing the
current load_factor() by either increasing bucket_count() (via
rehash or reserve), or decreasing size() (e.g.
erase), and then changing max_load_factor().

This resolution reaffirms that load_factor() <= max_load_factor() is a
container invariant which can never be violated.
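The clamping behavior an implementation of this wording would exhibit can be sketched as follows (request_max_load_factor is an invented helper written against the standard unordered_set API):

```cpp
#include <algorithm>
#include <unordered_set>

// Sketch of the proposed semantics: a requested value below the current
// load factor is clamped to max(z, load_factor()), so the invariant
// load_factor() <= max_load_factor() is preserved and no rehash occurs.
float request_max_load_factor(std::unordered_set<int>& s, float z) {
    s.max_load_factor(std::max(z, s.load_factor()));
    return s.max_load_factor();
}
```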

[2016-11-27, Nico comments]

Current implementations behave differently.

In regard to the sentence

"The only guarantee we have is that, if user requests a max_load_factor
that is less than the current load_factor, then the operation will take
constant time, thus outlawing an implementation that chooses to rehash
and so preserve as a class invariant that load_factor < max_load_factor."

Note that the current spec says that there is constant complexity
without any precondition. So, rehashing to keep the invariant would
violate the spec (which is probably not the intention).

promise, packaged_task, and async are the only
places where a shared state is actually supposed to be allocated. Accordingly,
promise and packaged_task are "allocator-aware". But
function template async provides no way to provide an allocator.

[2013-09 Chicago]

Matt: deprecate async

Nico: read my paper

Alisdair: defer issues to wait for polymorphic allocators

Alisdair: defer, active topic of research

Deferred

[2014-02-20 Re-open Deferred issues as Priority 4]

[2015-05 Lenexa, SG1 response]

We want whatever status approximates: "will not fix; we're working on a replacement facility and don't want to add features to a broken one"

In 26.2.3 [sequence.reqmts] p3, we have "il designates an object of type
initializer_list<value_type>", and then several functions that take
'il' as an argument. However, an expression like {1, 2, 'a'} is not
an object of type initializer_list<int> unless it's used to initialize
an explicitly-typed variable of that type. I believe we want:

There is an ambiguity in how std::basic_ios::init method (30.5.5.2 [basic.ios.cons])
can be used in the derived class. The Standard only specify the state of the basic_ios
object after the call completes. However, in basic_ios default constructor description
(30.5.5.2 [basic.ios.cons]) there is this sentence:

Effects: Constructs an object of class basic_ios (30.5.3.7 [ios.base.cons])
leaving its member objects uninitialized. The object shall be initialized by calling basic_ios::init
before its first use or before it is destroyed, whichever comes first; otherwise the behavior is undefined.

This restriction hints that basic_ios::init should be called exactly
once before the object can be used or destroyed, because basic_ios::init
may not know whether it was called before or not (i.e. whether its members are actually
uninitialized or are initialized by the previous call to basic_ios::init). There
is no such restriction in the basic_ios::init preconditions so it is not clear whether it is
allowed to call basic_ios::init multiple times or not.

This problem has already affected publicly available implementations.
For example, Microsoft Visual C++ STL introduces a memory leak if
basic_ios::init is called multiple times, while GCC 4.7 and STLPort
reinitialize the basic_ios object correctly without memory leak or any
other undesired effects. There was a discussion of this issue on Boost
developers mailing list,
and there is a test case
that reproduces the problem. The test case is actually a bug report for my Boost.Log library,
which attempts to cache basic_ostream-derived objects internally to avoid expensive construction
and destruction. My stream objects allowed resetting the stream buffer pointers the stream
is attached to, without requiring the stream to be destroyed and reconstructed.

My personal view of the problem and proposed resolution follows.

While apparently the intent of basic_ios::init is to provide a way to
initialize basic_ios after default construction, I see no reason to
forbid it from being called multiple times to reinitialize the stream.
Furthermore, it is possible to implement a conforming basic_ios that
does not have this restriction.

The quoted above section of the Standard that describes the effects of
the default constructor is misleading. The Standard does not mandate
any data members of basic_ios or ios_base (30.5.3 [ios.base]), which
it derives from. This means that the implementation is allowed to use
non-POD data members with default constructors that initialize the
members with particular default values. For example, in the case of
Microsoft Visual C++ STL the leaked memory is an std::locale instance
that is dynamically allocated during basic_ios::init, a raw pointer to
which is stored within ios_base. It is possible to store e.g. an
unique_ptr instead of a raw pointer as a member of ios_base, the smart
pointer will default initialize the underlying raw pointer on default
construction and automatically destroy the allocated object upon being
reset or destroyed, which would eliminate the leak and allow
basic_ios::init to be called multiple times. This leads to conclusion
that the default constructor of basic_ios cannot leave "its member
objects uninitialized" but instead performs default initialization of
the member objects, which would mean the same thing in case of POD types.

However, I feel that restricting ios_base and basic_ios members to
non-POD types is not acceptable. Since multiple calls to basic_ios::init are
not forbidden by the Standard, I propose to correct the basic_ios default
constructor description so that it is allowed to destroy basic_ios object
without calling basic_ios::init. This would imply that any raw members of
basic_ios and ios_base should be initialized to values suitable for
destruction (essentially, this means only initializing raw pointers to NULL). The new
wording could look like this:

Effects: Constructs an object of class basic_ios (30.5.3.7 [ios.base.cons])
initializing its member objects to unspecified state, only suitable for basic_ios destruction.
The object shall be initialized by calling basic_ios::init before its first use; otherwise
the behavior is undefined.

This would remove the hint that basic_ios::init must be called exactly
once. Also, this would remove the requirement for basic_ios::init to
be called at all before the destruction. This is also an important issue because
the derived stream constructor may throw an exception before it manages to call
basic_ios::init (for example, if the streambuf constructor throws), and
in this case the basic_ios destructor has undefined behavior.

To my mind, the described modification is sufficient to resolve the issue. But to
emphasize the possibility to call basic_ios::init multiple times, a remark
or a footnote for basic_ios::init postconditions could be added to explicitly
state the semantics of calling it multiple times. The note could read as follows:

The function can be called multiple times during the object's lifetime. Each subsequent
call reinitializes the object to the initial state described in the postconditions.

[2013-04-20, Bristol]

Alisdair: The current wording is unclear but the proposed resolution is wrong

Solution: Clarify that init must be called once and only once. Move then to review.

-2- Effects: Constructs an object of class basic_ios (30.5.3.7 [ios.base.cons])
leaving its member objects uninitialized initializing its member objects to unspecified state,
only suitable for basic_ios destruction. The object shall be initialized by calling
basic_ios::init before its first use or before it is destroyed, whichever comes first;
otherwise the behavior is undefined.

void init(basic_streambuf<charT,traits>* sb);

Postconditions: The postconditions of this function are indicated in Table 128.

-?- Remarks: The function can be called multiple times during the object's lifetime. Each subsequent
call reinitializes the object to the initial state described in the postconditions.

The requirements on the functors used to arrange elements in the various associative and
unordered containers are given by a set of expressions in tables 102 — Associative container
requirements, and 103 — Unordered associative container requirements. In keeping with Library
convention these expressions make the minimal requirements necessary on their types. For
example, we have the following 3 row extracts for the unordered containers:

Expression

Assertion/note pre-/post-condition

X(n, hf, eq)
X a(n, hf, eq)

Requires: hasher and key_equal are CopyConstructible.

X(n, hf)
X a(n, hf)

Requires: hasher is CopyConstructible and
key_equal is DefaultConstructible.

X(n)
X a(n)

Requires: hasher and key_equal are DefaultConstructible.

However, the signature for each class template requires that the functors must effectively be
CopyConstructible for each of these expressions:

The letter of the standard can be honored as long as implementors recognize
their freedom to split this one signature into multiple overloads, so that
the documented default arguments (requiring a CopyConstructible functor)
are not actually passed as default arguments.
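The overload-splitting technique can be sketched with a hypothetical container skeleton (table, hash_, and pred_ are invented names):

```cpp
#include <cstddef>

// Hypothetical sketch of splitting one constructor with defaulted
// functor arguments into separate overloads, so that X(n) and X(n, hf)
// only require DefaultConstructible for the omitted functors, while
// X(n, hf, eq) requires CopyConstructible for both.
template <class Hash, class Pred>
struct table {
    explicit table(std::size_t n) : hash_(), pred_() { (void)n; }           // X(n)
    table(std::size_t n, const Hash& hf) : hash_(hf), pred_() { (void)n; }  // X(n, hf)
    table(std::size_t n, const Hash& hf, const Pred& eq)                    // X(n, hf, eq)
        : hash_(hf), pred_(eq) { (void)n; }

    Hash hash_;
    Pred pred_;
};
```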

As we look into the requirements for the copy constructor and copy-assignment
operator, the requirements are even more vague, as the explicit requirements on
the functors are not called out, other than saying that the functors are copied.

Must the functors be CopyAssignable? Or is CopyConstructible
sufficient in this case? Do we require that the functors be Swappable
so that the copy-swap idiom can be deployed here? Note that a type that is both
CopyConstructible and CopyAssignable is still not guaranteed to
be Swappable as the user may delete the swap function for their
type in their own namespace, which would be found via ADL.
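The deleted-ADL-swap point can be checked directly (user::Cmp is an invented type):

```cpp
#include <type_traits>

// A type that is CopyConstructible and CopyAssignable yet not Swappable:
// the deleted swap in its own namespace is found by ADL and selected in
// preference to the std::swap template.
namespace user {
    struct Cmp {
        bool operator()(int x, int y) const { return x < y; }
    };
    void swap(Cmp&, Cmp&) = delete;
}

static_assert(std::is_copy_constructible<user::Cmp>::value, "copyable");
static_assert(!std::is_swappable<user::Cmp>::value, "but not Swappable");
```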

Some clean-up of the requirements table looks necessary, to at least document the
assignment behavior. In addition, we should have clear guidance on whether these
functors should always be CopyConstructible, as suggested by the class
template definitions, or if the requirement tables are correct and we should
explicitly split up the constructors in the (unordered) associative containers
to no longer use default (function) arguments to obtain their defaulted functors.

I recommend the simplest solution would be to always require that the functors
for (unordered) associative containers be CopyConstructible, above the
requirements tables themselves, so that the issue need not be addressed within
the tables. I suggest that the assignment operators for these containers add
the requirement that the functors be Swappable, rather than forwarding
the corresponding Assignable requirement.

The revised resolution of LWG 2227 should resolve this issue as well. It follows the recommendations
of the submitter to require CopyConstructible requirements for the function objects owned by containers,
but it does not impose any further fundamental requirements.

This appears to require the result to have a default-constructed
allocator, which isn't even possible for all allocator types. I
suspect the allocator should be copied from 's' instead. Possibly
there should be an additional defaulted argument to override the
allocator of the result.

31.12.2.2 [re.tokiter.comp] p1 says that it0.operator==(it1) returns true "if
*this and right are both suffix iterators and suffix == right.suffix"; both
conditions are satisfied in this example. It does not say that they must both be iterators
into the same sequence, nor does it say (as general iterator requirements do) that they must
both be in the domain of == in order for the comparison to be meaningful. It's a
simple statement: they're equal if the strings they point at compare equal. Given this being
a valid comparison, the obtained result of "true" looks odd.

The problem is that for iterator values prior to the suffix iterator, equality means the same
regular expression and the same matched sequence (both uses of "same" refer to identity, not equality);
for the suffix iterator, equality means that the matched sequences compare equal.

The wstring_convert class template, described in D.18.1 [depr.conversions.string], does not
support custom stateful allocators. It only supports custom stateless allocators.

The to_bytes member function returns basic_string<char, char_traits<char>, Byte_alloc>
but it does not take an instance of Byte_alloc to pass to the constructor of the basic_string.

Similarly the from_bytes member function returns basic_string<Elem, char_traits<Elem>, Wide_alloc>
but it does not take an instance of Wide_alloc to pass to the constructor of the basic_string.

This makes these two member functions and the wstring_convert class template not usable when Wide_alloc
or Byte_alloc are stateful allocators.

[2013-01-22, Glen provides wording]

[2013-03-15 Issues Teleconference]

Moved to NAD Future.

This is clearly an extension that the LEWG may want to take a look at, once we have more experience
with appropriate use of allocators with the C++11 model.

Proposed resolution:

This wording is relative to N3485.

In D.18.1 [depr.conversions.string]/2 and /6 "Class template wstring_convert synopsis" change the overloads
of the member function from_bytes() so that all four overloads take an additional parameter
which is an instance of Wide_alloc:

In D.18.1 [depr.conversions.string] /8 specify that this Wide_alloc allocator parameter is used to
construct the wide_string object returned from the function:

-7- Effects: The first member function shall convert the single-element sequence byte to a wide string.
The second member function shall convert the null-terminated sequence beginning at ptr to a wide
string. The third member function shall convert the sequence stored in str to a wide string. The fourth
member function shall convert the sequence defined by the range [first, last) to a wide string.

-8- In all cases:

If the cvtstate object was not constructed with an explicit value, it shall be set to its default value
(the initial conversion state) before the conversion begins. Otherwise it shall be left unchanged.

The number of input elements successfully converted shall be stored in cvtcount.

The Wide_alloc allocator parameter is used to construct the wide_string object returned
from the function.

In D.18.1 [depr.conversions.string]/2 and /12 "Class template wstring_convert synopsis" change the overloads
of the member function to_bytes() so that all four overloads take an additional parameter
which is an instance of Byte_alloc:

In D.18.1 [depr.conversions.string] /13 specify that this Byte_alloc allocator parameter is used to
construct the byte_string object returned from the function:

-12- Effects: The first member function shall convert the single-element sequence wchar to a byte string.
The second member function shall convert the null-terminated sequence beginning at wptr to a byte
string. The third member function shall convert the sequence stored in wstr to a byte string. The
fourth member function shall convert the sequence defined by the range [first, last) to a byte string.

-13- In all cases:

If the cvtstate object was not constructed with an explicit value, it shall be set to its default value
(the initial conversion state) before the conversion begins. Otherwise it shall be left unchanged.

The number of input elements successfully converted shall be stored in cvtcount.

The Byte_alloc allocator parameter is used to construct the byte_string object returned
from the function.

Table 102 in 26.2.6 [associative.reqmts]/8 states on expression a.key_comp() that it
"returns the comparison object out of which a was constructed". At the same time,
26.2.1 [container.requirements.general]/8 states (starting in the third line) that
"...Any Compare, Pred, or Hash objects belonging to a and b
shall be swappable and shall be exchanged by unqualified calls to non-member swap...". This is
problematic for any compliant implementation, since once swapped the container cannot return the comparison
object out of which it was constructed without incurring the cost of storing an otherwise needless object.
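The observable consequence can be sketched with a stateful comparator (cmp and its id member are invented for illustration):

```cpp
#include <map>

// After swap, key_comp() returns the other container's original
// comparison object, so it cannot be "the comparison object out of
// which a was constructed" without an extra stored copy.
struct cmp {
    int id = 0;
    bool operator()(int x, int y) const { return x < y; }
};
using id_map = std::map<int, int, cmp>;
```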

The simple solution is to correct that statement in Table 102, but I believe this is part of a larger problem
of underspecified behavior: The new standard has made an effort in regards to allocators and now fully
specifies what happens to stateful allocator objects. It has even specified what happens to stateful hasher
and key_equal members of unordered containers (they propagate), but it says nothing about stateful
comparison objects of (ordered) associative containers, except for the statement in
26.2.1 [container.requirements.general]/8 referred above and only related to swap.

For example, it is unclear to me what is specified to happen on an assignment: should the comparison object
be copied/moved along with the elements, or should the left-hand side object keep its own?
Maybe this has been intentionally left unspecified for compatibility with C++98, which, as I
understand it, specified that comparison objects were kept for the entire life of the container (like allocators)
— an unfortunate choice. But anyway, the segment of 26.2.1 [container.requirements.general] quoted
above seems to break any possible backwards compatibility with C++98 in this regard.

Therefore, for consistency with how this is dealt with for unordered associative
containers, I propose that Table 102 be modified as follows:

The row for expression a.key_comp() is changed so that its "assertion/note pre-/post-condition" reads
"Returns a's comparison object."

A new row is added at the appropriate location (which I believe would be after the "X(il)" row), with:

Copy constructor. In addition to
the requirements of Table 96, copies
the comparison object.

Linear in b.size()

a = b

X&

Copy assignment operator. In addition to
the requirements of Table 96, copies the
comparison object.

Linear in a.size() and b.size()

[2013-03-15 Issues Teleconference]

Moved to Review.

[2013-04-18, Bristol]

STL: can't believe we don't specify this already. this is totally necessary

Alisdair: how does it do this? copy construction? assignment?

Also need it for move.

STL: we already specify this for constructing from a comparator, not during copy construction though.

Jonathan: don't like wording, should say "key_compare is CopyConstructible. Uses b.key_comp()
as a comparison object."

STL: we get it right for unordered!

Jonathan: can't wordsmith this now, but I think implementations do the right thing.

Alisdair: not sure what right thing is for moves. Also we say nothing about propagating allocators to functors.

Moved to Open.

[2015-02 Cologne]

TK: There's no need for fine-grained propagate/not-propagate control. If you don't want to propagate the predicate, you can
simply construct or insert from an iterator range.

VV: libstdc++ already implements the resolution of this issue.

GR: There are a couple of other problems. We don't specify move constructor and move assignment for maps. Those are just general.

TK: General container requirements already describe the semantics for {copy,move}-{construction,assignment}, so it doesn't
seem that there's room for choice in std::map assignments. unordered_map is different, though.

[Note: Check what general container requirements say about container equality.]

DK will draft wording. The decision is to unambiguously make all {copy,move}-{construction,assignment} operations endow the
LHS with the exact state of the RHS, including all predicates and hash function states.

Copy constructor. In addition to
the requirements of Table 96, copies
the comparison object.

Linear in b.size()

a = b

X&

Copy assignment operator. In addition to
the requirements of Table 96, copies the
comparison object.

Linear in a.size() and b.size()

…

a.key_comp()

X::key_compare

Returns a's comparison object.

constant

[2015-10-19 Daniel comments and provides alternative wording]

The current standard is especially unclear about what effects move operations of unordered/associative
containers should have. One example where this is standardized exactly can be found in
26.6.5.2 [priqueue.cons.alloc] p7:

-7- Effects: Initializes c with std::move(q.c) as the first argument and a as
the second argument, and initializes comp with std::move(q.comp).

A comparable example is the move operations of std::unique_ptr in regard to the deleter
(when it is not a reference), which also respect the move capabilities of that function object.

We have wording from C++98 for associative containers (but not for unordered containers!) that was never
adjusted for C++11 move semantics in 26.2.6 [associative.reqmts] p12:

When an associative container is constructed by passing a comparison object the container shall not store
a pointer or reference to the passed object, even if that object is passed by reference. When an associative
container is copied, either through a copy constructor or an assignment operator, the target container shall
then use the comparison object from the container being copied, as if that comparison object had been
passed to the target container in its constructor.

The second sentence of this wording is problematic for several reasons:

It only talks about copy operations, not about move operations, except that the term "assignment" without
a leading "copy" is a bit ambiguous (albeit it seems clear in the complete context).

It is not really clear how to interpret "as if that comparison object had been
passed to the target container in its constructor" for an assignment operation. A possible but not conclusive
interpretation could be that this is wording supporting a "copy-via-swap" idiom.

There does not exist similar wording for unordered containers, except that Table 102 provides entries for
copy construction and copy assignment of the containers whose wording just talks of "copies" in either case.

Existing implementations differ already:

Visual Studio 2015 uses copy construction and copy assignment for the two copy operations but uses swap operations
for the move operations.

GCC's libstdc++ performs copy construction and copy assignment for the two copy operations, and move
construction and move assignment for the two move operations, respectively.

In addition the wording also resolves LWG 2215: I believe that the current
wording should require that container function objects should meet the CopyConstructible requirements. Adding
this general requirement also fixes the underspecified requirements of the accessor functions key_comp() and
value_comp().

I don't think that a general requirement for Swappable is needed; only the member swap function currently requires it.
Nonetheless the wording below does support stateful functors that are also move-constructible or move-assignable,
hence the specified semantics in terms of move operations.

I should add the following warning, though: if this proposed wording were accepted, there is a small chance of
code breakage, because the current wording can be read as placing no general requirement that the
container functors be CopyConstructible. The following code example is accepted by gcc + libstdc++:

-8- In Table 101, X denotes an associative container class, a denotes a value of type X,
b denotes a possibly const value of type X, rv denotes a non-const rvalue of
type X, u denotes the name of a variable being declared, […]

Requires: key_compare is CopyConstructible. Effects: Constructs an empty container.
Uses a copy of c as a comparison object.

[…]

…

X(i,j,c)
X u(i,j,c);

Requires: key_compare is CopyConstructible. value_type is EmplaceConstructible into X from *i.
Effects: Constructs an empty container and inserts elements
from the range [i, j) into it; uses c as a comparison object.

[…]

…

X(il)

Same as X(il.begin(), il.end()).

X(b)
X a(b)

(In addition to the requirements of Table 95) Effects: Copy constructs the comparison object of a from
the comparison object of b.

Linear in b.size()

X(rv)
X a(rv)

(In addition to the requirements of Table 95 and Table 98) Effects: Move constructs the comparison object of a from
the comparison object of rv.

constant

a = b

X&

(In addition to the requirements of Table 95 and Table 98) Requires: key_compare is CopyAssignable.
Effects: Copy assigns the comparison object of b
to the comparison object of a.

Linear in a.size() and b.size()

a = rv

X&

(In addition to the requirements of Table 95 and Table 98) Requires: key_compare is MoveAssignable.
Effects: Move assigns from the comparison object of rv
to the comparison object of a.

-12- When an associative container is constructed by passing a comparison object the container shall not store
a pointer or reference to the passed object, even if that object is passed by reference. When an associative
container is copied, either through a copy constructor or an assignment operator, the target container shall
then use the comparison object from the container being copied, as if that comparison object had been
passed to the target container in its constructor.

Requires: hasher and key_equal are CopyConstructible. value_type is EmplaceConstructible into X from *i. Effects: […]

[…]

X(i, j, n, hf)
X a(i, j, n, hf)

X

Requires: hasher is CopyConstructible and key_equal is DefaultConstructible. value_type is EmplaceConstructible into X from *i. Effects: […]

[…]

…

X(b)
X a(b)

X

(In addition to the requirements of Table 95) Effects: Copy constructs the hash function, predicate, and maximum load factor
of a from the corresponding objects of b.

Average case linear in b.size(),
worst case quadratic.

X(rv)
X a(rv)

X

(In addition to the requirements of Table 95 and Table 98) Effects: Move constructs the hash function, predicate, and maximum load factor
of a from the corresponding objects of rv.

constant

a = b

X&

(In addition to the requirements of Table 95 and Table 98) Requires: hasher and key_equal are CopyAssignable.
Effects: Copy assigns the hash function, predicate, and maximum load factor
of b to the corresponding objects of a.

Average case linear in b.size(),
worst case quadratic.

a = rv

X&

(In addition to the requirements of Table 95 and Table 98) Requires: hasher and key_equal are MoveAssignable.
Effects: Move assigns the hash function, predicate, and maximum load factor
from rv to the corresponding objects of a.

Linear

…

[2016-08-07]

Daniel removes the previously proposed wording to work on revised wording.

The "magic" kill_dependency function is a function without any constraints on the template parameter T
and is specified as

template <class T>
T kill_dependency(T y) noexcept;

-14- Effects: The argument does not carry a dependency to the return value (1.10).

-15- Returns: y.

I wonder whether the unconditional noexcept is really intended here:
Assume we have some type U that has a potentially throwing move
constructor (or it has a potentially throwing copy constructor and no
move constructor), for any "normal" function template with the same
signature and the same effects (modulo the dependency magic) this
would mean that it cannot safely be declared noexcept because of the
return statement being part of the complete function call affected by
noexcept (The by-value function argument is irrelevant in this
context). In other words it seems that a function call such as

would be required to call std::terminate if the copy constructor of S throws during the return
of std::kill_dependency.

Requiring copy elision for this already magic function looks like low-hanging fruit to solve this problem,
but this case is not covered by the current copy elision rules; see 12.8 p31 b1:

"— in a return statement in a function with a class return type, when the expression is the name of a non-volatile
automatic object (other than a function or catch-clause parameter) with the same cv-unqualified type as the
function return type, the copy/move operation can be omitted by constructing the automatic object directly into the
function's return value".

Some options come into my mind:

Make the exception-specification conditional via std::is_nothrow_move_constructible:

template <class T>
T kill_dependency(T y) noexcept(see below);

This is similar to the approach taken for function templates such as std::swap.

Use perfect forwarding (This needs further wording to correct the effects):

template <class T>
T&& kill_dependency(T&& y) noexcept;

Impose constraints on the template arguments in regard to throwing exceptions while copying/moving.

Keep the state as it is but possibly add a note about a call of std::terminate in above scenario.

A second problem is that the current wording does not make clear whether it is well-defined to call the function with
types that are reference types, such as in the following example:

This compiles, but will result in code running amok. The potential trap (that cannot be easily detected by the
library implementation) could be reduced by making this constructor explicit. It would still
be selected here, but the code would be ill-formed, so the programmer gets a clear message.

[2014-06 Rapperswil]

JW: can't fix this, don't want to touch this, Do The Right Thing clause has been a source of tricky issues.
only really happens with string literals, that's the only way to create an array that isn't obviously an array

GR: want to see paper

AM: is it only string literals, or also UDLs?

STL: maybe, but we don't need to deal with that. This is only a problem in a very specific case

In 30.7.4.3 [istream.unformatted] / 34, when describing putback, it says that "rdbuf->sputbackc()"
is called. The problem is not the obvious typos in the expression, but the fact that it may lead to different
interpretations, since it is nowhere specified what the required argument to sputbackc is.

It can be guessed to be "rdbuf()->sputbackc(c)", but "rdbuf()->sputbackc(char_type())" or
just anything would be as conforming (or non-conforming) as the first guess.

[2017-12-12, Jonathan comments and provides wording]

Fix the bogus expression, and change sputbackc() to just sputbackc
since we're talking about the function, not an expression sputbackc() (which
isn't a valid expression any more than rdbuf->sputbackc() is). Make the
corresponding change to the equivalent wording in p36 too.

[
2017-12-14 Moved to Tentatively Ready after 6 positive votes on c++std-lib.
]

-34- Effects: Behaves as an unformatted input function (as described above), except that the function
first clears eofbit. After constructing a sentry object, if !good() calls
setstate(failbit) which may throw an exception, and return. If rdbuf() is not null, calls
rdbuf()->sputbackc(c). If rdbuf() is null, or if
sputbackc returns traits::eof(), calls setstate(badbit) (which may throw
ios_base::failure (30.5.5.4 [iostate.flags])).
[Note: This function extracts no characters, so the value returned by the next call to gcount()
is 0. — end note]

-35- Returns: *this.

basic_istream<charT, traits>& unget();

-36- Effects: Behaves as an unformatted input function (as described above), except that the function
first clears eofbit. After constructing a sentry object, if !good() calls
setstate(failbit) which may throw an exception, and return. If rdbuf() is not null, calls
rdbuf()->sungetc(). If rdbuf() is null, or if sungetc returns
traits::eof(), calls setstate(badbit) (which may throw ios_base::failure
(30.5.5.4 [iostate.flags])).
[Note: This function extracts no characters, so the value returned by the next call to gcount() is
0. — end note]

[para 4]: "The order of these operations is significant because the call to get_deleter()
may destroy *this."

[para 5]: "The postcondition does not hold if the call to get_deleter() destroys *this since
this->get() is no longer a valid expression."

It seems this wording was created to resolve 998 due to the possibility that a unique_ptr may be
destroyed through deletion of its stored pointer where that directly or indirectly refers to the same unique_ptr.
If unique_ptr is required to support circular references then it seems this must be normative text: an implementation
is currently allowed to operate on *this after the assignment and deletion specified in para 4, since this is only
'disallowed' by the non-normative note.

I propose the following draft rewording:

[para 4]: Effects: assigns p to the stored pointer, and then if the old value of the stored pointer, old_p, was not
equal to nullptr, calls get_deleter()(old_p). No operation shall be performed after the call to
get_deleter()(old_p) that requires *this to be valid, because the deletion may destroy *this if it is
referred to directly or indirectly by the stored pointer. [Note: The order of these operations is significant
because the call to get_deleter() may destroy *this. — end note]

[para 5]: Postconditions: If the call get_deleter()(old_p) destroyed *this, none. Otherwise,
get() == p. [Note: The postcondition does not hold if the call to get_deleter()
destroys *this since this->get() is no longer a valid expression. — end note]

I expect it will also be necessary to amend the requirements for a deleter, so in addition:

23.11.1.2 [unique.ptr.single] [para 1]: The default type for the template parameter D is default_delete.
A client-supplied template argument D shall be a function object type (20.10), lvalue-reference to function, or
lvalue-reference to function object type for which, given a value d of type D and a value ptr of type
unique_ptr<T, D>::pointer, the expression d(ptr) is valid and has the effect of disposing of the pointer
as appropriate for that deleter. Where D is not an lvalue reference type, d(ptr) shall be valid if ptr
refers directly or indirectly to the invoking unique_ptr object.

In Chicago, we determined that the original proposed change to 23.11.1.2 [unique.ptr.single]/1 was insufficient, because
d might be a reference to a deleter functor that's destroyed during self-destruction.

We believed that 23.11.1.2.5 [unique.ptr.single.modifiers]/4 was already sufficiently clear. The Standard occasionally prevents
implementations of X from doing various things, through the principle of "nothing allows X to fail in that situation".
For example, v.push_back(v[0]) is required to work for non-empty vectors because nothing allows that to fail. In this case,
the intent to allow self-destruction is already clear.

Additionally, we did not believe that 23.11.1.2.5 [unique.ptr.single.modifiers]/5 had to be changed. The current note is slightly
squirrely but it does not lead to confusion for implementers or users.

The default type for the template parameter D is default_delete.
A client-supplied template argument D shall be a function object type (20.10), lvalue-reference to function, or
lvalue-reference to function object type for which, given a value d of type D and a value ptr of type
unique_ptr<T, D>::pointer, the expression d(ptr) is valid and has the effect of disposing of the pointer
as appropriate for that deleter. Where D is not an lvalue reference type, d(ptr) shall be valid if ptr
refers directly or indirectly to the invoking unique_ptr object.

-3- Requires: The expression get_deleter()(get()) shall be well formed, shall have well-defined behavior,
and shall not throw exceptions.

-4- Effects: assigns p to the stored pointer, and then if the old value of the stored pointer, old_p, was not
equal to nullptr, calls get_deleter()(old_p). No operation shall be performed after the call to
get_deleter()(old_p) that requires *this to be valid, because the deletion may destroy *this if it is
referred to directly or indirectly by the stored pointer. [Note: The order of these operations is significant
because the call to get_deleter() may destroy *this. — end note]

-5- Postconditions: If the call get_deleter()(old_p) destroyed *this, none. Otherwise,
get() == p. [Note: The postcondition does not hold if the call to get_deleter()
destroys *this since this->get() is no longer a valid expression. — end note]

The default type for the template parameter D is default_delete.
A client-supplied template argument D shall be a function object type (20.10), lvalue-reference to function, or
lvalue-reference to function object type for which, given a value d of type D and a value ptr of type
unique_ptr<T, D>::pointer, the expression d(ptr) is valid and has the effect of disposing of the pointer
as appropriate for that deleter. d(ptr) shall be valid even if it triggers the destruction of d or (if
D is an lvalue reference to function object type) the function object that d refers to.

[2015-05, Lenexa]

After some discussion in Lenexa there was some wavering on whether the added sentence is necessary. Here is example code that
demonstrates why the extra sentence is necessary. In this example the call to d(ptr) is valid; however, the deleter
references *this after destructing its element:

The line "The deleter = **destructed**" represents the deleter referencing itself after it has been destructed by the
d(ptr) expression, but prior to that call returning.

Suggested alternative to the current proposed wording:

The expression d(ptr) shall not refer to the object d after it executes ptr->~T().

[2015-07, Telecon]

Geoffrey: Deleter may or may not execute ~T().
Alisdair: After the destructor after the element has run. Say it in words instead of code.
Howard will provide updated wording. Perhaps need both normative and non-normative wording.

The default type for the template parameter D is default_delete.
A client-supplied template argument D shall be a function object type (20.9), lvalue-reference to function, or
lvalue-reference to function object type for which, given a value d of type D and a value ptr of type
unique_ptr<T, D>::pointer, the expression d(ptr) is valid and has the effect of disposing of the pointer
as appropriate for that deleter. The expression d(ptr), if it destructs the object referred to by ptr,
shall not refer to the object d after it destructs *ptr.
[Note: The object being destructed may control the lifetime of d. — end note]

I believe that the following variation on IRIW should admit executions in
which c1 = d1 = 5 and c2 = d2 = 0. If this is allowed, then what is the sequence of
program evaluations for 32.4 [atomics.order] p9 that justifies the store to z? It seems that
32.4 [atomics.order] p9 should not allow this execution because one of the stores to x or y has
to appear earlier in the sequence, each of the fetch_adds reads the previous load in the thread (and thus must
appear later in the sequence), and 32.4 [atomics.order] p9 states that each load must read from the last prior
assignment in the sequence.

It seems that the easiest fix is to allow a load in 32.4 [atomics.order] p9 to read from any prior
store in the evaluation order.

That said, I would personally advocate the following:
It seems to me that C/C++ atomics are in a bit of different situation than Java
because:

People are expected to use relaxed C++ atomics in potentially racy
situations, so it isn't clear that semantics as complicated as the JMM's
causality would be sane.

People who use C/C++ atomics are likely to be experts and use them in a
very controlled fashion. I would be really surprised if compilers would find
any real wins by optimizing the use of atomics.

Why not do something like:

There is a satisfaction DAG of all program evaluations. Each evaluation
observes the values of variables as computed by some prior assignment in
the DAG.

There is an edge x->y between two evaluations x and y if:

the evaluation y observes a value computed by the evaluation x or

the evaluation y is an atomic store, the evaluation x is an atomic load, and
there is a conditional branch c that may depend (intrathread dependence) on x
and x-sb->c and c-sb->y.

This seems to allow reordering of relaxed atomics that processors do without
extra fence instructions, allows most reorderings by the compiler, and gets
rid of satisfaction cycles.

[2015-02 Cologne]

Handed over to SG1.

[2015-05 Lenexa, SG1 response]

This was partially addressed (weasel-worded) in C++14 (See N3786).
The remainder is an open research problem. N3710 outlines a "solution" that doesn't have a consensus behind it because it costs performance. We have no better solution at the moment.

Proposed resolution:

2267(i). partial_sort_copy underspecified for ranges of two different types

(and the usual overload for an explicitly provided comparison function). The standard says nothing about requirements
in the case where the input type (iterator_traits<InputIterator>::value_type) and the output type
(iterator_traits<RandomAccessIterator>::value_type) are different.

Presumably the input type must be convertible to the output type. What's less clear is what the requirements are on
the comparison operator. Does the algorithm only perform comparisons on two values of the output type, or does it also
perform comparisons on values of the input type, or might it even perform heterogeneous comparisons?

It compiles without error on my desktop. Is it required to? I can't find evidence from the standard that it is.
In my test std::copy was found by argument-dependent lookup because the implementation I used made
std::vector<int>::iterator a user-defined type defined in namespace std. But the standard
only requires std::vector<int>::iterator to be an implementation specified random access iterator
type. I can't find anything requiring it to be a user-defined type at all (and in fact there are reasonable implementation
where it isn't), let alone a user defined type defined in a specific namespace.

Since the defining namespace of container iterators is visible to users, should the standard say anything about what
that namespace is?

In 30.8.2.4 [stringbuf.virtuals]/1, basic_stringbuf::underflow() is specified to unconditionally
return traits::eof() when a read position is not available.

The semantics of basic_stringbuf require, and existing libraries implement it so, that this function makes
a read position available when it is possible to do so, e.g. if some characters were inserted into the stream since the
last call to overflow(), resulting in pptr() > egptr(). Compare to the conceptually similar
D.7.1.3 [depr.strstreambuf.virtuals]/15.

-1- Returns: If the input sequence has a read position available or the function makes a read position available
(as described below), returns traits::to_int_type(*gptr()). Otherwise, returns traits::eof(). Any
character in the underlying buffer which has been initialized is considered to be part of the input sequence.

-?- The function can make a read position available only if (mode & ios_base::in) != 0 and if the write
next pointer pptr() is not null and is greater than the current read end pointer egptr(). To make a read
position available, the function alters the read end pointer egptr() to equal pptr().

During the acceptance of N3471 and
some similar constexpr papers, specific wording was added to pair, tuple, and other templates
that was intended to impose implementation constraints ensuring that the observable constexpr "character"
of a defaulted function template is solely determined by the required expressions of the user-provided types when instantiated,
for example:

The defaulted move and copy constructor, respectively, of pair shall be a constexpr function if and only if
all required element-wise initializations for copy and move, respectively, would satisfy the requirements for
a constexpr function.

This wording doesn't require enough, especially since the core language, via CWG 1358, now supports constexpr
function template instantiations even if such a function cannot appear in a constant expression (as specified in 8.6 [expr.const])
or as a constant initializer of that object (as specified in [basic.start.init]). The wording should be
improved and should require valid uses in constant expressions and as constant initializers instead.

-2- An invocation of the move or copy constructor of pair shall be a constant expression
(8.6 [expr.const]) if all required element-wise initializations would be constant expressions. An invocation of the
move or copy constructor of pair shall be a constant initializer for that pair object ([basic.start.init])
if all required element-wise initializations would be constant initializers for the respective subobjects.

-2- An invocation of the move or copy constructor of tuple shall be a constant expression (8.6 [expr.const])
if all required element-wise initializations would be constant expressions. An invocation of the move or copy constructor of
tuple shall be a constant initializer for that tuple object ([basic.start.init]) if all
required element-wise initializations would be constant initializers for the respective subobjects. An invocation of the
move or copy constructor of tuple<> shall be a constant expression, or a constant initializer for that
tuple<> object, respectively, if the function argument would be a constant expression.

-7- Remarks: An invocation of the copy constructor of duration shall
be a constant expression (8.6 [expr.const]) if the required initialization of the member rep_ would be a constant expression.
An invocation of the copy constructor of duration shall be a constant initializer for that duration object
([basic.start.init]) if the required initialization of the member rep_ would be a constant initializer
for this subobject.

2290(i). Top-level "SFINAE"-based constraints should get a separate definition in Clause 17

The current library specification uses at several places wording that is intended to refer to
core language template deduction failure at the top-level of expressions (aka "SFINAE"), for example:

The expression declval<T>() = declval<U>() is well-formed when treated as an unevaluated operand (Clause 5).
Access checking is performed as if in a context unrelated to T and U. Only the validity of the immediate context
of the assignment expression is considered. [Note: The compilation of the expression can result in side effects
such as the instantiation of class template specializations and function template specializations, the generation of
implicitly-defined functions, and so on. Such side effects are not in the "immediate context" and can result in the program
being ill-formed. — end note]

Similar wording can be found in the specification of result_of, is_constructible, and is_convertible,
added to resolve NB comments via LWG 1390 and 1391 through
N3142.

This wording is necessary to limit speculative compilations needed to implement these traits, but it is also lengthy and repetitive.

[2014-05-19, Daniel suggests a descriptive term]

constrictedly well-formed expression:

An expression e depending on a set of types A1, ..., An which is well-formed when treated as
an unevaluated operand (Clause 5). Access checking is performed as if in a context unrelated to A1, ...,
An. Only the validity of the immediate context of e is considered. [Note: The compilation of
the expression can result in side effects such as the instantiation of class template specializations and function
template specializations, the generation of implicitly-defined functions, and so on. Such side effects are not in the
"immediate context" and can result in the program being ill-formed. — end note]

[2014-05-20, Richard and Jonathan suggest better terms]

Richard suggested "locally well-formed"

Jonathan suggested "contextually well-formed" and then "The expression ... is valid in a contrived argument
deduction context"

[2014-06-07, Daniel comments and improves wording]

The 2014-05-19 suggestion applied only to expressions, but there are two important cases that are not expressions: one involves an object definition (std::is_constructible) and the other a function definition (std::is_convertible). Therefore I suggest replacing "expression" with "program construct" in the definition of Jonathan's suggested term "valid in a contrived argument deduction context".

I would like to point out that, given the new definition of "valid in a contrived argument deduction context", several other places in the Library specification could take advantage of this wording to improve the existing specification, such as 23.14.13.2 [func.wrap.func] p2, most functions in 23.10.9.2 [allocator.traits.members], and the Insertable, EmplaceConstructible, and Erasable definitions in 26.2.1 [container.requirements.general]. But given that these are not yet fully described in terms of the aforementioned wording, I would recommend fixing them in a separate issue once the committee has agreed to follow the suggestion presented by this issue.

[2015-05-05 Lenexa: Move to Open]

...

MC: I think we like the direction but it isn't quite right: it needs some work

JW: I'm prepared to volunteer to move that further, hopefully with the help of Daniel

Roger Orr: should this be Core wording because it doesn't really have anything to do with libraries - the term could then just be used here

AM: Core has nothing to deal with that, though

HT: it seems there is nothing to imply that allows dropping out with an error - maybe that's a separate issue

MC: I'm not getting what you are getting at: could you write an issue? - any objection to move to Open?

A program construct c depending on a set of types A1, ..., An, and treated as
an unevaluated operand (Clause 5) when c is an expression, which is well-formed.
Access checking is performed as if in a context unrelated to A1, ..., An.
Only the validity of the immediate context (17.9.2 [temp.deduct]) of c is considered.
[Note: The compilation of c can result in side effects such as the instantiation of class template
specializations and function template specializations, the generation of implicitly-defined functions, and so on.
Such side effects are not in the "immediate context" and can result in the program being ill-formed. —
end note].

Change Table 49 ("Type property predicates") as indicated:

Table 49 — Type property predicates

Template

Condition

Preconditions

…

template <class T, class U>
struct is_assignable;

The expression declval<T>() = declval<U>() is valid in a contrived argument deduction context ([defns.valid.contr.context]) for types T and U. well-formed when treated as an unevaluated operand (Clause 5). Access checking is performed as if in a context unrelated to T and U. Only the validity of the immediate context of the assignment expression is considered. [Note: The compilation of the expression can result in side effects such as the instantiation of class template specializations and function template specializations, the generation of implicitly-defined functions, and so on. Such side effects are not in the "immediate context" and can result in the program being ill-formed. — end note]

the predicate condition for a template specialization is_constructible<T, Args...> shall be satisfied if and only if the following variable definition would be well-formed for some invented variable t would be valid in a contrived argument deduction context ([defns.valid.contr.context]) for types T and Args...:

T t(create<Args>()...);

[Note: These tokens are never interpreted as a function declaration. — end note] Access checking is
performed as if in a context unrelated to T and any of the Args. Only the validity of the immediate context
of the variable initialization is considered. [Note: The evaluation of the initialization can result in side
effects such as the instantiation of class template specializations and function template specializations, the
generation of implicitly-defined functions, and so on. Such side effects are not in the "immediate context"
and can result in the program being ill-formed. — end note]

If the expression INVOKE(declval<Fn>(), declval<ArgTypes>()...) is valid in a contrived argument deduction context ([defns.valid.contr.context]) for types Fn and ArgTypes... well formed when treated as an unevaluated operand (Clause 5), the member typedef type shall name the type decltype(INVOKE(declval<Fn>(), declval<ArgTypes>()...)); otherwise, there shall be no member type. Access checking is performed as if in a context unrelated to Fn and ArgTypes. Only the validity of the immediate context of the expression is considered. [Note: The compilation of the expression can result in side effects such as the instantiation of class template specializations and function template specializations, the generation of implicitly-defined functions, and so on. Such side effects are not in the "immediate context" and can result in the program being ill-formed. — end note]

the predicate condition for a template specialization is_convertible<From, To> shall be satisfied if and only if the return expression in the following code would be well-formed valid in a contrived argument deduction context ([defns.valid.contr.context]) for types To and From, including any implicit conversions to the return type of the function:

To test() {
return create<From>();
}

[Note: This requirement gives well defined results for reference types, void types, array types, and
function types. — end note] Access checking is performed as if in a context unrelated to To
and From. Only the validity of the immediate context of the expression of the return-statement (including conversions to
the return type) is considered. [Note: The evaluation of the conversion can result in side effects such as
the instantiation of class template specializations and function template specializations, the generation of
implicitly-defined functions, and so on. Such side effects are not in the "immediate context" and can result
in the program being ill-formed. — end note]

2292(i). Find a better phrasing for "shall not participate in overload resolution"

The C++14 CD has 25 sections including the phrase "X shall not
participate in overload resolution ...". Most of these uses are double
negatives, which are hard to interpret. "shall not ... unless" tends
to be the easiest to read, since the condition is true when the
function is available, but we also have a lot of "if X is not Y, then
Z shall not participate", which actually means "You can call Z if X is
Y." The current wording is also clumsy and long-winded. We should find
a better and more concise phrasing.

As an initial proposal, I'd suggest using "X is enabled if and only if Y" in prose
and adding an "Enabled If: ..." element to 20.4.1.4 [structure.specifications].

Daniel:

I suggest naming this new specification element for 20.4.1.4 [structure.specifications] "Template Constraints:" instead, because the mentioned wording form was intentionally provided, starting with LWG 1237, to give implementations more freedom in realizing the concrete constraints. Instead of the original std::enable_if-based specifications we can use better forms of "SFINAE" constraints today, and it eases the path to possible language-based constraints in the future.

I've tried it on two implementations (MSVC, GCC) and they are inconsistent with each other on this.

Daniel Krügler:

As currently written, the Remarks element applies unconditionally for all cases and thus should
"win". The question arises whether the introduction of this element by LWG 424 had actually intended
to change the previous Note to a Remarks element. In either case the wording should be improved
to clarify this special case.

The library gives explicit permission in 20.5.4.2.1 [namespace.std] p2 that user code may explicitly instantiate
a library template provided that the instantiations depend on at least one user-defined type:

A program may explicitly instantiate a template defined in the standard library only if the declaration
depends on the name of a user-defined type and the instantiation meets the standard library requirements
for the original template.

But it seems that the C++11 library is not specified in a way that guarantees such an instantiation to be well-formed
if the minimum requirements of the library are not satisfied.

For example, the first template parameter of std::vector is not, in general, required to be
DefaultConstructible, but due to the split of the single C++03 member function
with default argument

void resize(size_type sz, T c = T());

into

void resize(size_type sz);
void resize(size_type sz, const T& c);

the effect is now that for a type ND that is not DefaultConstructible, such as

struct ND {
ND(int);
};

the explicit instantiation of std::vector<ND> is no longer well-formed, because the attempt to
instantiate the single-argument overload of resize cannot succeed: this function imposes
the DefaultInsertable requirements, and given the default allocator this effectively requires
DefaultConstructible.

But DefaultConstructible is not the only problem: what about CopyConstructible versus mere
MoveConstructible? It turns out that currently the second resize overload
would fail during an explicit instantiation for a type like

struct MO {
MO() = default;
MO(MO&&) = default;
};

because it imposes CopyInsertable requirements that end up being equivalent to the CopyConstructible
requirements for the default allocator.

Technically a library can solve these issues: for special member functions by defining them in some base class, and for others
by transforming them effectively into function templates thanks to the great feature of default template arguments for
function templates (at the moment the validity of the latter approach depends on a resolution of core language issue
CWG 1635, though). E.g. the here mentioned
resize functions of std::vector could be prevented from instantiation by defining them like this
with an implementation:

In this case, these functions could also be defined in a base class, but the latter approach won't work in all cases.

Basically such an implementation is required to constrain all member functions that are not covered by the general
requirements imposed on the actual library template parameters. I tested three different C++11 library implementations,
but none could instantiate, for example, std::list, std::vector, or std::deque with
value types that are not DefaultConstructible or are only MoveConstructible.

This issue is raised to clarify the current situation in regard to the actual requirements imposed on user-provided
types that are used to explicitly instantiate Library-provided templates. For example, the current Container requirements
impose very few requirements on the actual value type, and it is unclear to what extent library implementations have
to respect that.

The minimum solution of this issue should be to at least realize that there is no fundamental requirement on
DefaultConstructible for value types of library containers, because we have since C++03 the general
statement of 20.5.3.1 [utility.arg.requirements] ("In general, a default constructor is not required.").
It is unclear whether CopyConstructible should be required for an explicit instantiation request, but
given the careful introduction of move operations in the library it would seem astonishing that a
MoveConstructible type wouldn't suffice for value types of the container types.

In any case I can envision at least two approaches to solve this issue:

As indicated in LWG 2292, those function could get an explicit "Template Constraints:"
element, albeit this promises more than needed to solve this issue.

The library could introduce a completely new element form, such as "Instantiation Constraints:" that
would handle this situation for explicit instantiation situations. This would allow for simpler techniques
to solve the issue when explicit instantiation is required compared to the first bullet, because it would not
(necessarily) guarantee SFINAE-friendly expression well-formedness, such as inspecting the expression
std::declval<std::vector<ND>&>().resize(0) in an unevaluated context.

It should be noted that the 2013-08-27 comment to LWG 2193 could be resolved by a similar solution
as indicated in this issue here.

Proposed resolution:

2307(i). Should the Standard Library use explicit only when necessary?

Effects: Alters the length of the string designated by *this as follows:

If n <= size(), the function replaces the string designated by *this with a string of length n whose
elements are a copy of the initial elements of the original string designated by *this.

If n > size(), the function replaces the string designated by *this with a string of length n whose
first size() elements are a copy of the original string designated by *this, and whose remaining elements are all
initialized to c.

This wording is a relic of the copy-on-write era. In addition to being extremely confusing, it has undesirable implications.
Saying "replaces the string designated by *this with a string of length n whose elements are a copy" suggests
that the trimming case can reallocate. Reallocation during trimming should be forbidden, as it is for vector.

26.2.1 [container.requirements.general]/10 says that unless otherwise specified, "no swap() function invalidates
any references, pointers, or iterators referring to the elements of the containers being swapped. [Note: The end()
iterator does not refer to any element, so it may be invalidated. — end note]". However, move constructors and move
assignment operators aren't given similar invalidation guarantees. The guarantees need several exceptions, so I do not believe
that blanket language like /11 "Unless otherwise specified (either explicitly or by defining a function in terms of other functions),
invoking a container member function or passing a container as an argument to a library function shall not invalidate iterators to,
or change the values of, objects within that container." is applicable.

[2014-02-13 Issaquah]

General agreement on intent, several wording nits and additional paragraphs to hit.

STL to provide updated wording. Move to Open.

[2015-02 Cologne]

AM: in the proposed wording, I'd like to mention that the iterators now refer to elements of a different container.
I think we're saying something like this somewhere. JY: There's some wording like that for swap I think. TK: It's also in
list::splice(). DK to JY: 23.2.1p9.

VV: The issue says that STL was going to propose new wording. Has he done that? AM: I believe we're looking at that.
GR: The request touches on multiple paragraphs, and this PR has only one new paragraph, so this looks like it's not up-to-date.
MC: This was last updated a year ago in Issaquah.

no copy constructor or assignment operator of a returned iterator throws an exception.

no move constructor (or move assignment operator when
allocator_traits<allocator_type>::propagate_on_container_move_assignment::value is true) of a container
(except for array) invalidates any references, pointers, or iterators referring to the elements of the source container.
[Note: The end() iterator does not refer to any element, so it may be invalidated. — end note]

no swap() function throws an exception.

no swap() function invalidates any references, pointers, or iterators referring to the elements of the
containers being swapped. [Note: The end() iterator does not refer to any element, so it may be
invalidated. — end note]

The table in 31.5.1 [re.synopt]/1 says that regex_constants::collate "Specifies that character ranges of the form
"[a-b]" shall be locale sensitive.", but 31.13 [re.grammar]/14 says that it affects individual character comparisons
too.

[2012-02-12 Issaquah : recategorize as P3]

Marshall Clow: 28.13/14 only applies to ECMAScript

All: we're unsure

Jonathan Wakely: we should ask John Maddock

Move to P3

[2014-5-14, John Maddock response]

The original intention was the original wording: namely that collate only made character ranges locale sensitive.
To be frank it's a feature that's probably hardly ever used (though I have no real hard data on that), and is a leftover
from early POSIX standards which required locale sensitive collation for character ranges, and then later changed
to implementation defined if I remember correctly (basically nobody implemented locale-dependent collation).

So I guess the question is do we gain anything by requiring all character-comparisons to go through the locale when this bit
is set? Certainly it adds a great deal to the implementation effort (it's not what Boost.Regex has ever done). I guess the
question is are differing code-points that collate identically an important use case? I guess there might be a few Unicode
code points that do that, but I don't know how to go about verifying that.

STL:

If this was unintentional, then 31.5.1 [re.synopt]/1's table should be left alone, while 31.13 [re.grammar]/14
should be changed instead.

Jeffrey Yasskin:

This page
mentions that [V] in Swedish should match "W" in a perfect world.

However, the most recent version of TR18 retracts
both language-specific loose matches and language-specific ranges
because "for most full-featured regular expression engines, it is
quite difficult to match under code point equivalences that are not
1:1" and "tailored ranges can be quite difficult to implement
properly, and can have very unexpected results in practice. For
example, languages may also vary whether they consider lowercase below
uppercase or the reverse. This can have some surprising results: [a-Z]
may not match anything if Z < a in that locale."

IMO, +1 to changing 28.13 instead of 28.5.1. It seems like we'd be on
fairly solid ground if we wanted to remove regex_constants::collate
entirely, in favor of named character classes, but of course that's
not for this issue.

Effects: leaves the atomic object in an uninitialized state. [Note: These semantics ensure compatibility
with C. — end note]

This implementation requirement is OK for POD types, like int, but 32.6 [atomics.types.generic] p1
intentionally allows template arguments of atomic with a non-trivial default constructor ("The type of the template argument
T shall be trivially copyable (3.9)"), so this wording can be read in a way that makes the behaviour of the following code
undefined:

For a user-defined emulation of atomic the expected outcome would be defined and the program would output "42",
but existing implementations differ and the result value is a "random number" for at least one implementation. This seems
very surprising to me.

To realize that apparent requirement, an implementation would be required either to violate normal language rules internally
or to perform specific bit-randomization techniques after the normal default-initialization that called the default constructor
of S.

According to my understanding, the non-normative note in 99 [atomics.types.operations.req] p4 is intended to
refer to types that are valid C-types, but the example type S is not such a type.

To make the mental model of atomic's default constructor more intuitive for user code, I suggest clarifying the wording
to have the effects of default-initialization instead. The current state seems more like an unintended effect of the imprecise
language used here, and has some similarities to wording that was incorrectly used to specify atomic_flag initialization
as described by LWG 2159.

[2014-05-17, Daniel comments and provides alternative wording]

The current wording was considered controversial as expressed by reflector discussions. To me, the actual problem is not newly
introduced by that wording, but instead is already present in basically all paragraphs specifying semantics of atomic types,
since the wording never clearly distinguishes the value of the actual atomic type A and the value of the "underlying",
corresponding non-atomic type C. The revised proposed wording attempts to improve the current ambiguity of these two
kinds of values.

Previous resolution from Daniel [SUPERSEDED]:

This wording is relative to N3691.

Modify 99 [atomics.types.operations.req] p4 as indicated: [Editorial note: There is no exposition-only
member in atomic, which makes it a bit hard to specify what actually is initialized, but the usage of the term "value"
seems consistent with similar wording used to specify the effects of the atomic load functions]

A ::A () noexcept = default;

-4- Effects: leaves the atomic object in an uninitialized state The value of the atomic object
is default-initialized (11.6 [dcl.init]). [Note: These semantics ensure compatibility
with C. — end note]

[2015-02 Cologne]

Handed over to SG1.

[2017-07 Toronto]

SG1 reviewed the PR below:

Previous resolution [SUPERSEDED]:

This wording is relative to N3936.

Modify 99 [atomics.types.operations.req] p2 as indicated: [Editorial note: This is a near-to editorial
change not directly affecting this issue, but atomic_address does no longer exist and the pointed to definition is
relevant in the context of this issue resolution.]

-2- In the following operation definitions:

an A refers to one of the atomic types.

a C refers to its corresponding non-atomic type. The atomic_address atomic type corresponds to the
void* non-atomic type.

[…]

Modify 99 [atomics.types.operations.req] p4 and the following as indicated: [Editorial note: There
is no exposition-only member in atomic, which makes it a bit hard to specify what actually is initialized, but
the introductory wording of 99 [atomics.types.operations.req] p2 b2 defines: "a C refers to its
corresponding non-atomic type." which helps to specify the semantics in terms of "the C value referred to by the
atomic object"]

A::A() noexcept = default;

-4- Effects: leaves the atomic object in an uninitialized state Default-initializes (11.6 [dcl.init])
the C value referred to by the atomic object. [Note: These semantics ensure compatibility with C.
— end note]

constexpr A::A(C desired) noexcept;

-5- Effects: Direct-i Initializes the C value referred to by the atomic object
with the value desired. Initialization is not an atomic operation (1.10). […]

-8- Effects: Atomically sets the bool value pointed to by object or by this to false.
[…]

SG1 also reviewed another PR from Lawrence Crowl. Lawrence's feedback was that turning atomic<T> into a container of T was a mistake, even if we allow the implementation of atomic to contain a T. SG1 agreed with Lawrence, but his PR (http://wiki.edg.com/bin/view/Wg21toronto2017/DefaultInitNonContainer) had massive merge conflicts caused by the adoption of P0558. Billy O'Neal supplied a new PR, which SG1 agreed to and which LWG looked at informally. This change also makes it clearer that initialization of an atomic is not an atomic operation in all forms, changes the C compatibility example to actually be compatible with C, and removes "initialization-compatible" which is not defined anywhere.

SG1 considered moving ATOMIC_VAR_INIT into Annex D, as their understanding at this time is that WG14 is considering removal of that macro. However, consensus was that moving things between clauses would require a paper, and that we should wait to remove that until WG14 actually does so.

-?- Initialization of an atomic object is not an atomic operation (6.8.2 [intro.multithread]). [Note: It is possible for an access to an atomic object A to race with its construction, for example by communicating the address of the just-constructed object A via memory_order_relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note]

-1- [Note: Many operations are volatile-qualified. The "volatile as device register" semantics have not changed in the standard. This qualification means that volatility is preserved when applying these operations to volatile objects. It does not mean that operations on non-volatile objects become volatile. — end note]

atomic() noexcept = default;

-2- Effects: Leaves the atomic object in an uninitialized state. [Note: These semantics ensure compatibility with C. — end note] Initializes the atomic object with a default-initialized (11.6 [dcl.init]) value of type T. [Note: The default-initialized value may not be pointer-interconvertible with the atomic object. — end note]

constexpr atomic(T desired) noexcept;

-3- Effects: Initializes the atomic object with the value desired. Initialization is not an atomic operation (6.8.2 [intro.multithread]). [Note: It is possible to have an access to an atomic object A race with its construction, for example by communicating the address of the just-constructed object A to another thread via memory_order_relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior — end note]

#define ATOMIC_VAR_INIT(value) see below {value}

-4- The macro expands to a token sequence suitable for constant initialization of an atomic variable of static storage duration of a type that is initialization-compatible with value. [Note: This operation may need to initialize locks. — end note] Concurrent access to the variable being initialized, even via an atomic operation, constitutes a data race. [Note: This macro ensures compatibility with C. — end note]
[Example: atomic<int> atomic_int v = ATOMIC_VAR_INIT(5);
— end example]

2335(i). array<array<int, 3>, 4> should be layout-compatible with int[4][3]

In order to replace some uses of C arrays with std::array, we need it
to be possible to cast from a std::array<> to an equivalent C array.
Core wording doesn't appear to be in quite the right state to allow
casting, but if we specify that appropriate types are
layout-compatible, we can at least write:

union {
array<array<array<int, 2>, 3>, 4> arr;
int carr[4][3][2];
};

to view memory as the other type: C++14 CD [class.mem]p18.

I believe it's sufficient to add "array<T, N> shall be
layout-compatible (6.7 [basic.types]) with T[N]." to
26.3.7.1 [array.overview], but we might also need some extension to
12.2 [class.mem] to address the possibility of layout-compatibility
between struct and array types.

I checked that libc++ on MacOS already implements this, although it
would be good for someone else to double-check; I haven't checked any
other standard libraries.

31.7 [re.traits]/7, begins with "if typeid(use_facet<collate<charT> >) == typeid(collate_byname<charT>)",
which appears to be pseudocode with the intention to convey that the collate facet has not been replaced by the user. Cf. the wording in
N1429 "there is no portable way to implement
transform_primary in terms of std::locale, since even if the sort key format returned by
std::collate_byname<>::transform is known and can be converted into a primary sort key, the user can still
install their own custom std::collate implementation into the locale object used, and that can use any sort key
format they see fit.".

Taken literally, 31.7 [re.traits]/7 appears to imply that named locales are required to hold their collate facets with
dynamic type std::collate_byname<charT>, which is in fact true in some implementations (e.g. libc++), but not others
(e.g. libstdc++). This does not follow from the description of _byname in 25.3.1.1.2 [locale.facet]/4, which only requires
it to provide semantics equivalent to the named locale's facet, not to actually be one.

[2015-05-06 Lenexa: Move to Open]

MC, RP: Consequence of failing to follow the rule is UB.

MC: Tightening of requirements.

RP: It should be this way, we just didn't impose it before.

MC: Second change is a bug fix, original code didn't work.

TK: Doesn't seem to make things worse.

Bring up in larger group tomorrow.

JW arrives.

JW: libstdc++ violates this due to two std::string ABIs.

JW: This prevents installing a type derived from Facet_byname, constrains the implementor from using a smarter derived class version.

JW: Can't look at facet id to detect replacement, because replacements have the same id.

RP: Can you give it multiple ids through multiple inheritance?

JW: No, the facet mechanism wouldn't like that.

JW: We should also ask Martin Sebor, he's implemented this stuff recently.

MC: Sounds like this resolution doesn't work, need a better solution.

JW: Write in words "if the facet has not been replaced by the user", the implementation knows how to detect that, but not like this.

For some standard facets a standard "..._byname" class, derived from it, implements the virtual function
semantics equivalent to provided by that facet of the locale constructed by locale(const char*)
with the same name.
Each such facet provides a constructor that takes a const char* argument, which names the locale, and a
refs argument, which is passed to the base class constructor. Each such facet also provides a constructor
that takes a string argument str and a refs argument, which has the same effect as calling the first
constructor with the two arguments str.c_str() and refs. If there is no "..._byname"
version of a facet, the base class implements named locale semantics itself by reference to other facets. For any
locale loc constructed by locale(const char*) and facet Facet that has a corresponding standard
Facet_byname class, typeid(use_facet<Facet>(loc)) == typeid(Facet_byname).

-7- Effects: if typeid(use_facet<collate<charT> >(getloc())) == typeid(collate_byname<charT>)
and the form of the sort key returned by collate_byname<charT>::transform(first, last) is known and
can be converted into a primary sort key then returns that key, otherwise returns an empty string.

2342(i). User conversion to wchar_t const* or to wchar_t not invoked for operator<<

For wide streams argument types wchar_t const* and wchar_t are supported only as template parameters.
User defined conversions are not considered for template parameter matching. Hence inappropriate overloads of
operator<< are selected when an implicit conversion is required for the argument, which is inconsistent
with the behavior for char const* and char, is unexpected, and is a useless result.

-1- Effects: Behaves as a formatted output function (30.7.5.2.1 [ostream.formatted.reqmts]) of out.
Constructs a character sequence seq. If c has type char and the character type of the stream
is not char, then seq consists of out.widen(c); otherwise seq consists of c.
Determines padding for seq as described in 30.7.5.2.1 [ostream.formatted.reqmts]. Inserts seq into
out. Calls os out.width(0).

-4- Effects: Behaves like a formatted inserter (as described in 30.7.5.2.1 [ostream.formatted.reqmts]) of out.
Creates a character sequence seq of n characters starting at s, each widened using out.widen()
(27.5.5.3), where n is the number that would be computed as if by:

traits::length(s) for the following overloads:

where the first argument is of type basic_ostream<charT, traits>&
and the second is of type const charT*,

and also for the overload where the first argument is of type
basic_ostream<char, traits>& and the second is of type const char*,

where the first argument is of type
basic_ostream<wchar_t, traits>& and the second is of type const wchar_t*,

std::char_traits<char>::length(s) for the overload where the first argument is of type
basic_ostream<charT, traits>& and the second is of type const char*,

traits::length(reinterpret_cast<const char*>(s)) for the other two overloads.

The numeric value of char16_t is defined to be the corresponding Unicode
code point, which for 7-bit characters coincides with the ASCII value and
with UTF-8. However, char is not guaranteed to use an
encoding compatible with ASCII. For example, '1' in EBCDIC is 241.

I found three places in the standard casting narrow char
literals: bitset::bitset, bitset::to_string and quoted.

PJ confirmed this issue and says he has a solution used
in their <filesystem> implementation, and he may want to
propose it to the standard.

The solution in my mind, for now, is to make those default
arguments magical, where the "magic" can be implemented
with a C11 _Generic selection (works in clang):

[Drafting note: This is a sample wording fixing only one case;
I'm just too lazy to copy-paste it before we have discussed whether
the solution is worthwhile and sufficient (for example, should the
other `charT`s like `unsigned char` simply not compile without
supplying those arguments? I hope so). — end drafting note]

Now cin.rdstate() is just failbit in libstdc++ (and Dinkumware, per
PJ), but failbit | badbit in libc++. A similar difference is found in other
places, such as eofbit | badbit after std::getline.

PJ and Matt both agree that the intention (of badbit + rethrow) is
"to signify an exception arising in user code, not the iostreams package".

In addition, I found the following words in unformatted input
function's requirements (30.7.4.3 [istream.unformatted]):

If an exception is thrown during input then ios::badbit is turned on
in *this's error state. (Exceptions thrown from basic_ios<>::clear()
are not caught or rethrown.) If (exceptions()&badbit) != 0 then the
exception is rethrown.

The content within the parentheses was added by LWG defect 61,
and does fix the ambiguity. However, it fixed only 1 of the 4
requirements, and it lost some context (the word "rethrown" does not
appear before this sentence within this section).

[Lenexa 2015-05-07: Marshall to research and report]

Proposed resolution:

This wording is relative to N3797.

[Drafting note: The editor is kindly asked to introduce additional spaces at the following marked occurrences of
operator& — end drafting note]

-1- Each formatted input function begins execution by constructing an object of class sentry with the noskipws
(second) argument false. If the sentry object returns true, when converted to a value of type bool, the
function endeavors to obtain the requested input. If an exception, other than the ones thrown from clear(), if any,
is thrown during input then ios::badbit
is turned on[Footnote 314] in *this's error state. If (exceptions()&badbit) != 0
then the exception is rethrown.
In any case, the formatted input function destroys the sentry object. If no exception has been thrown, it returns *this.

-1- Each formatted output function begins execution by constructing an object of class sentry. If this object
returns true when converted to a value of type bool, the function endeavors to generate the requested
output. If the generation fails, then the formatted output function does setstate(ios_base::failbit),
which might throw an exception. If an exception, other than the ones thrown from clear(), if any, is thrown
during output, then ios::badbit is turned on[Footnote 327]
in *this's error state. If (exceptions()&badbit) != 0 then the exception is rethrown.
Whether or not
an exception is thrown, the sentry object is destroyed before leaving the formatted output function. If no
exception is thrown, the result of the formatted output function is *this.

-1- Each unformatted output function begins execution by constructing an object of class sentry. If this object
returns true, while converting to a value of type bool, the function endeavors to generate the requested
output. If an exception, other than the ones thrown from clear(), if any, is thrown during output,
then ios::badbit is turned on[Footnote 330] in *this's error state.
If (exceptions() & badbit) != 0 then the exception is rethrown. In any case, the unformatted output
function ends by destroying the sentry object, then, if no exception was thrown, returning the value specified
for the unformatted output function.

-1- Each unformatted input function begins execution by constructing an object of class sentry with the default
argument noskipws (second) argument true. If the sentry object returns true, when converted to a value
of type bool, the function endeavors to obtain the requested input. Otherwise, if the sentry constructor exits
by throwing an exception or if the sentry object returns false, when converted to a value of type bool, the
function returns without attempting to obtain any input. In either case the number of extracted characters
is set to 0; unformatted input functions taking a character array of non-zero size as an argument shall also
store a null character (using charT()) in the first location of the array. If an exception, other than the
ones thrown from clear(), if any, is thrown during input
then ios::badbit is turned on[Footnote 317] in *this's error state. If (exceptions()&badbit) != 0
then the exception is rethrown. It also counts the number of characters extracted. If no exception has been thrown it ends
by storing the count in a member object and returning the value specified. In any event the sentry object is destroyed
before leaving the unformatted input function.

2352(i). Is a default-constructed std::seed_seq intended to produce a predictable .generate()?

"T is a class type, but not a union type, with no non-static data members other than bit-fields of length 0,
no virtual member functions, no virtual base classes, and no base class B for which is_empty<B>::value
is false."

This is incorrect: there is no such thing as a non-static data member that is a bit-field of length 0, since bit-fields of
length 0 must be unnamed, and unnamed bit-fields are not members (see 12.2.4 [class.bit] p2).

It also means that classes such as:

struct S {
    int : 3;
};

are empty (because they have no non-static data members). There's implementation divergence on the value of
is_empty<S>::value.

I'm not sure what the purpose of is_empty is (or how it could be useful), but if it's desirable for the above type to
not be treated as empty, something like this could work:

"T is a class type, but not a union type, with no non-static data members, no unnamed
bit-fields of non-zero length, no virtual member functions, no virtual base classes, and no base class
B for which is_empty<B>::value is false."

and if the above type should be treated as empty, then this might be appropriate:

"T is a class type, but not a union type, with no (named) non-static data members other than bit-fields of
length 0, no virtual member functions, no virtual base classes, and no base class B for which
is_empty<B>::value is false."

[2016-08 Chicago]

Walter says: We want is_empty_v<S> to produce false as a result. Therefore, we recommend adoption of the first of the issue's suggestions.

T is a non-union class type with no non-static data members, no unnamed bit-fields of non-zero length, no virtual member functions, no virtual base classes, and no base class B for which is_empty_v<B> is false.

[2016-10 Telecon]

Should probably point at section 1.8 for some of this. Status back to 'Open'

Proposed resolution:

Modify Table 38 — Type property predicates for is_empty as follows:

T is a non-union class type with no non-static data members, no unnamed bit-fields of non-zero length, no virtual member functions, no virtual base classes, and no base class B for which is_empty_v<B> is false.

2362(i). unique, associative emplace() should not move/copy the mapped_type constructor
arguments when no insertion happens

Effects: Inserts a value_type object t constructed with std::forward<Args>(args)... if and only if there is no element in the
container with key equivalent to the key of t. The bool component of
the returned pair is true if and only if the insertion takes place,
and the iterator component of the pair points to the element with key
equivalent to the key of t.

where we'd like to avoid destroying the Foo if the insertion doesn't
take place (if the container already had an element with the specified key).

N3873 includes
a partial solution to this in the form of a new emplace_stable member function, but LEWG's
discussion strongly agreed that we'd rather have emplace() Just Work:

Should map::emplace() be guaranteed not to move/copy its arguments if the insertion doesn't happen?

SF: 8 F: 3 N: 0 A: 0 SA: 0

This poll was marred by the fact that we didn't notice or call out
that emplace() must construct the key before doing the lookup, and it
must not then move the key after it determines whether an insert is
going to happen, and the mapped_type instance must live next to the key.

The very similar issue 2006 was previously marked NAD, with
N3178 as
discussion. However, given LEWG's interest in the alternate behavior,
we should reopen the question in this issue.

We will need a paper that describes how to implement this before we can make more progress.

The class shared_timed_mutex shall satisfy all of the
SharedTimedMutex requirements (30.4.1.4). It shall be a standard layout class (Clause 9).

There are no SharedTimedMutex requirements; this name doesn't appear anywhere else in the standard. (Prior to N3891,
this was SharedMutex, which was equally undefined.)

I assume this concept should be defined somewhere?

Also, N3891 changes 33.4.3.5 [thread.sharedtimedmutex.requirements] from defining "shared mutex type" to defining
"shared timed mutex type", but its paragraph 2 still talks about "shared mutex type". Is that OK? I think you could argue
that it's clear enough what it means, but presumably it should use the term that paragraph 1 defines.

33.4.4.4 [thread.lock.shared] paragraph 1 talks about the "shared mutex requirements", which again is a term that isn't
defined, and presumably means "the requirements on a shared timed mutex type" or similar (maybe if SharedMutex or
SharedTimedMutex were defined it could be reused here).

The presented wording supplements the existing terms mutex types, timed mutex types, and shared timed mutex
types with a new set of corresponding MutexType, TimedMutexType, and SharedTimedMutexType requirements.

The reason for the change of requirement names is two-fold: First, the new names better match the intention to have a concrete
name for the requirements imposed on the corresponding mutex types (this kind of requirement deviates from the more general
Lockable requirements, which are not restricted to an explicitly enumerated set of library types). Second, using
**MutexType over **Mutex has the additional advantage that it reduces the chances of confusing named
requirements with template parameters named Mutex (such as those of unique_lock or shared_lock).

Nonetheless the wording presented here has one unfortunate side effect: Once applied, types
used to instantiate std::shared_lock cannot be user-defined shared mutex types due to 33.4.4.4 [thread.lock.shared].
The reason is the current lack of a SharedLockable requirement set, which would complete the
existing BasicLockable and Lockable requirements (which are "real" requirements). This restriction is not
actually a problem introduced by the provided resolution but one that existed before and merely becomes more obvious now.

[2015-02 Cologne]

Handed over to SG1.

[2015-05 Lenexa, SG1 response]

Thanks to Daniel, and please put it in SG1-OK status. Perhaps open another issue for the remaining problem Daniel points out?

The new wording reflects the addition of the new shared mutex types. The approach used for shared_lock
is similar to the one used for unique_lock: The template argument Mutex has a reduced requirement set that is not
sufficient for all operations. Only those members that require stronger requirements of SharedTimedMutexType
specify that additionally in the Requires element of the corresponding prototype specifications.

The proposed wording could be more general if we would introduce more fundamental requirements set for SharedLockable
and SharedTimedLockable types which could be satisfied by user-provided types as well, because the
SharedMutexType and SharedTimedMutexType requirements are essentially restricted to an enumerated set of
types provided by the Standard Library. But this extension seemed too large for this issue and can be easily fixed later
without any harm.

-1- The mutex types are the standard library types std::mutex, std::recursive_mutex, std::timed_mutex,
std::recursive_timed_mutex, and std::shared_timed_mutex. They shall meet the MutexType
requirements set out in this section. In this description, m denotes an object of a mutex type.

-1- The timed mutex types are the standard library types std::timed_mutex, std::recursive_timed_mutex,
and std::shared_timed_mutex. They shall meet the TimedMutexType requirements set out below.
In this description, m denotes an object of a mutex type, rel_time denotes an object of an instantiation of
duration (20.12.5), and abs_time denotes an object of an instantiation of time_point (20.12.6).

-2- The class recursive_timed_mutex shall satisfy all of the TimedMutexType requirements
(33.4.3.3 [thread.timedmutex.requirements]). It shall be a standard-layout class (Clause 9).

Change 33.4.3.5 [thread.sharedtimedmutex.requirements] as indicated: [Drafting note: The reference to the
timed mutex types requirements has been moved after introducing the new requirement set to ensure that
SharedTimedMutexType refines TimedMutexType. — end drafting note]

-1- The standard library type std::shared_timed_mutex is a shared timed mutex type. Shared timed mutex
types shall meet the SharedTimedMutexType requirements of timed mutex types
(33.4.3.3 [thread.timedmutex.requirements]), and additionally shall meet the requirements set out below. In this
description, m denotes an object of a mutex type, rel_time denotes
an object of an instantiation of duration (20.12.5), and abs_time denotes an object of an instantiation of
time_point (20.12.6).

-2- The class shared_timed_mutex shall satisfy all of the SharedTimedMutexType requirements
(33.4.3.5 [thread.sharedtimedmutex.requirements]). It shall be a standard-layout class (Clause 9).

Change 33.4.4.4 [thread.lock.shared] as indicated: [Drafting note: Once
N3995 has been applied, the following
reference should be changed to the new SharedMutexType requirements ([thread.sharedmutex.requirements]) or
even better to some new SharedLockable requirements (to be defined) — end drafting note]

-1- The mutex types are the standard library types std::mutex, std::recursive_mutex, std::timed_mutex,
std::recursive_timed_mutex, std::shared_mutex, and std::shared_timed_mutex. They shall meet the
MutexType requirements set out in this section. In this description, m denotes an object
of a mutex type.

-1- The timed mutex types are the standard library types std::timed_mutex, std::recursive_timed_mutex,
and std::shared_timed_mutex. They shall meet the TimedMutexType requirements set out below.
In this description, m denotes an object of a mutex type, rel_time denotes an object of an instantiation of
duration (20.12.5), and abs_time denotes an object of an instantiation of time_point (20.12.6).

-2- The class recursive_timed_mutex shall satisfy all of the TimedMutexType requirements
(33.4.3.3 [thread.timedmutex.requirements]). It shall be a standard-layout class (Clause 9).

Change 33.4.3.4 [thread.sharedmutex.requirements] as indicated: [Drafting note: The reference to the
mutex types requirements has been moved after introducing the new requirement set to ensure that
SharedMutexType refines MutexType. — end drafting note]

-1- The standard library types std::shared_mutex and std::shared_timed_mutex are shared mutex types.
Shared mutex types shall meet the SharedMutexType requirements of mutex types
(33.4.3.2 [thread.mutex.requirements.mutex]), and additionally shall meet the
requirements set out below. In this description, m denotes an object of a shared mutex type.

-2- The class shared_mutex shall satisfy all of the SharedMutexType requirements
for shared mutexes (33.4.3.4 [thread.sharedmutex.requirements]). It shall be a standard-layout class (Clause 9).

Change 33.4.3.5 [thread.sharedtimedmutex.requirements] as indicated: [Drafting note: The reference to the
timed mutex types requirements has been moved after introducing the new requirement set to ensure that
SharedTimedMutexType refines TimedMutexType and SharedMutexType. — end drafting note]

-1- The standard library type std::shared_timed_mutex is a shared timed mutex type. Shared timed mutex
types shall meet the SharedTimedMutexType requirements of timed mutex types
(33.4.3.3 [thread.timedmutex.requirements]), shared mutex types (33.4.3.4 [thread.sharedmutex.requirements]),
and additionally shall meet the requirements set out below. In this description, m denotes an object of
a shared timed mutex type, rel_time denotes an object of an instantiation of duration (20.12.5), and
abs_time denotes an object of an instantiation of time_point (20.12.6).

-2- The class shared_timed_mutex shall satisfy all of the SharedTimedMutexType requirements
for shared timed mutexes (33.4.3.5 [thread.sharedtimedmutex.requirements]). It shall be a standard-layout
class (Clause 9).

-14- Requires: The supplied Mutex type shall meet the SharedTimedMutexType requirements
(33.4.3.5 [thread.sharedtimedmutex.requirements]).
The calling thread does not own the mutex for any ownership mode.

-17- Requires: The supplied Mutex type shall meet the SharedTimedMutexType requirements
(33.4.3.5 [thread.sharedtimedmutex.requirements]).
The calling thread does not own the mutex for any ownership mode.

it is not clear from the wording of the Standard whether begin.equal(end)
must be true. In at least one implementation it is not (CC: Sun C++ 5.10 SunOS_sparc Patch 128228-25 2013/02/20) and in at least
one implementation it is (gcc version 4.3.2 x86_64-unknown-linux-gnu).

27.6.3 [istreambuf.iterator] says that end is an end-of-stream iterator since it was default
constructed. It also says that an iterator becomes equal to an end-of-stream
iterator when end of stream is reached by sgetc() having returned eof().
99 [istreambuf.iterator::equal] says that equal() returns true if and only if both iterators are end-of-stream
iterators, or neither is. But there seems to be no requirement that equal() check for end-of-stream by calling sgetc().

Jiahan Zi at BloombergLP discovered this issue through his code failing to
work correctly. Dietmar Kühl has opined in a private communication that
the iterators should compare equal.

These are all strings that are correctly parsed by std::strtod, but not by the stream extraction operators.
They contain characters that are deemed invalid in stage 2 of parsing.

If we're going to say that we're converting by the rules of strtold, then we should accept all the things that
strtold accepts.

[2016-04, Issues Telecon]

People are much more interested in round-tripping hex floats than handling inf and nan. Priority changed to P2.

Marshall says he'll try to write some wording, noting that this is a very closely specified part of the standard, and has remained unchanged for a long time. Also, there will need to be a sample implementation.

[2016-08, Chicago]

Zhihao provides wording

The src array in Stage 2 does narrowing only. The actual
input validation is delegated to strtold (independently of
the parsing in Stage 3, which is again delegated
to strtold) by saying:

[...] If it is not discarded, then a check is made to determine
if c is allowed as the next character of an input field of the
conversion specifier returned by Stage 1.

So a conforming C++11 num_get is supposed to magically
accept a hexfloat without an exponent

0x3.AB

because we refer to C99, and the fix for this issue should be
just expanding the src array.

Support for infinities and NaNs is not proposed because of the
complexity of nan(n-char-sequence).

type_info's destructor is depicted as being virtual, which is nearly unobservable to users (since they can't construct
or copy this class, they can't usefully derive from it). However, it's technically observable (via is_polymorphic and
has_virtual_destructor). It also imposes real costs on implementations, requiring them to store one vptr per
type_info object, when RTTI space consumption is a significant concern.

Making this implementation-defined wouldn't affect users (who can observe this only if they're specifically looking for it) and
wouldn't affect implementations that need virtual here, but it would allow other implementations to drop virtual
and improve their RTTI space consumption.

-1- The class type_info describes type information generated by the implementation. Objects of this class
effectively store a pointer to a name for the type, and an encoded value suitable for comparing two types for
equality or collating order. The names, encoding rule, and collating sequence for types are all unspecified
and may differ between programs. Whether ~type_info() is virtual is implementation-defined.

2412(i). promise::set_value() and promise::get_future() should not race

The problem is that both promise::set_value() and
promise::get_future() are non-const member functions which modify the
same object, and we only have wording saying that the set_value() and
wait() calls (i.e. calls setting and reading the shared state) are
synchronized.

The calls don't actually access the same memory locations, so the
standard should allow it. My suggestion is to state that calling
get_future() does not conflict with calling the various functions that
make the shared state ready, but clarify with a note that this does
not imply any synchronization or "happens before", only being free
from data races.

[2015-02 Cologne]

Handed over to SG1.

[2016-10-21, Nico comments]

After creating a promise or packaged task one thread can call get_future()
while another thread can set values/exceptions (either directly or via function call).
This happens very easily.

-?- Synchronization: Calls to this function do not conflict (6.8.2 [intro.multithread])
with calls to set_value, set_exception, set_value_at_thread_exit, or
set_exception_at_thread_exit. [Note: Such calls need not be synchronized, but implementations
must ensure they do not introduce data races. — end note]

-13- Throws: future_error if *this has no shared state or if get_future has already been called on a
promise with the same shared state as *this.

-13- Returns: A future<R> object that shares the same shared state as *this.

-?- Synchronization: Calls to this function do not conflict (6.8.2 [intro.multithread])
with calls to operator() or make_ready_at_thread_exit. [Note: Such calls need not be
synchronized, but implementations must ensure they do not introduce data races. — end note]

-14- Throws: a future_error object if an error occurs.

-15- Error conditions: […]

[2017-02-28, Kona]

SG1 has updated wording for LWG 2412. SG1 voted to move this to Ready status by unanimous consent.

-?- Synchronization: Calls to this function do not introduce data races (6.8.2 [intro.multithread]) with
calls to set_value, set_exception, set_value_at_thread_exit, or set_exception_at_thread_exit.
[Note: Such calls need not synchronize with each other. — end note]

-13- Throws: future_error if *this has no shared state or if get_future has already been called on a
promise with the same shared state as *this.

-13- Returns: A future object that shares the same shared state as *this.

-?- Synchronization: Calls to this function do not introduce data races (6.8.2 [intro.multithread]) with calls
to operator() or make_ready_at_thread_exit. [Note: Such calls need not synchronize with each other.
— end note]

When NDEBUG is defined, assert must expand exactly to the token sequence ((void)0), with no
whitespace (C99 §7.2/1 and also C11 §7.2/1). This is a lost opportunity to pass the condition along to the optimizer.

The user may observe the token sequence using the stringize operator or discriminate it by making a matching #define
directive. There is little chance of practical code doing such things. It's reasonable to allow any expansion that is a void
expression with no side effects or semantic requirements, for example, an extension keyword or an attribute-specifier finagled
into the context.

Conforming optimizations would still be limited to treating the condition as a hint, not a requirement. Nonconformance on this
point is quite reasonable, though, given user preferences. In any case, it shouldn't depend on preprocessor quirks.

As for current practice, Darwin OS <assert.h> provides a GCC-style compiler hint __builtin_expect but only in
debug mode. Shouldn't release mode preserve hints?

Daniel:

The corresponding resolution should take care not to conflict with the intention behind LWG 2234.

N3936 20.5.5.8 [reentrancy]/1 talks about "functions", but that doesn't address the scenario of calling different member
functions of a single object. Member functions often have to violate and then re-establish invariants. For example, vectors
often have "holes" during insertion, and element constructors/destructors/etc. shouldn't be allowed to observe the vector
while it's in this invariant-violating state. The [reentrancy] Standardese should be extended to cover member functions,
so that implementers can either say that member function reentrancy is universally prohibited, or selectively allowed for
very specific scenarios.

AJM confirmed with SG1 that they had no special concerns with this issue, and LWG should retain ownership.

AM: this is overly broad, as it also covers calling the exact same member function on a different object
STL: so you insert into a map, and copying the value triggers another insertion into a different map of the same type
GR: reentrancy seems to imply the single-threaded case, but needs to consider the multi-threaded case

Needs more wording.

Move to Open

[2015-07 Telecon Urbana]

Marshall to ping STL for updated wording.

[2016-05 email from STL]

I don't have any better suggestions than my original PR at the moment.

-1- Except where explicitly specified in this standard, it is implementation-defined which functions (including different
member functions called on a single object) in the Standard C++ library may be recursively reentered.

Currently, std::experimental::optional::operator== imposes the EqualityComparable requirement which provides
two guarantees: It ensures that operator!= can rely on the equivalence-relation property and more importantly, that
the BooleanTestable requirements suggested by issue 2114 are automatically implied.

std::experimental::optional::operator< doesn't provide a LessThanComparable requirement, but there was quite
an historic set of changes involved with that family of types: As of
N3527
this operator was defined in terms of operator< of the contained type T and imposed the LessThanComparable
requirement. In the final acceptance step of optional by the committee, the definition was expressed in terms of std::less
and the LessThanComparable requirement had been removed.

The inconsistency between operator== and operator< should be removed. One possible course of action would be
to add the LessThanComparable to std::experimental::optional::operator<. The EqualityComparable requirement
of operator== could also be removed, but in that case both operators would at least need to require the BooleanTestable
requirements (see 2114) for the result type of T's operator== and operator<.

Arguably, the corresponding operators for pair and tuple do not impose LessThanComparable (nor
EqualityComparable), albeit the definitions of the "derived" relational functions depend on properties ensured by
LessThanComparable. According to the SGI definition, the intention was
to impose both EqualityComparable and LessThanComparable. If this is not intended, the standard should clarify
its position.

[2015-02 Cologne]

VV, DK, JY discuss why and when LessThanComparable was removed. AM: Move to LEWG. Please tell LWG when you look at it.

This is a stripped-down version of a real-world case where I
wrap objects in decorators. These decorators contribute some
functions, and forward all the rest of the API to the wrapped
object using perfect forwarding. There can be overloaded names.

Here the inner object provides an

out(const std::tuple<char, char>&) -> void

function, and the wrappers, in addition to perfect forwarding,
provide

out(const char&) -> void

The main function then calls out(l), where l is a char lvalue.

With (GCC's) libstdc++ I get the expected result: the char
overload is run. With (clang's) libc++ it is the tuple
version that is run.

The specification of std::align does not appear to specify what happens when the value of the size
parameter is 0. (The question of what happens when alignment is 0 is mentioned in another Defect Report, 2377;
it would change the behavior to be undefined rather than potentially implementation-defined.)

The case of size being 0 is interesting because the result is ambiguous. Consider the following code's output:

There are four straightforward answers as to what the behavior of std::align with size 0 should be:

The behavior is undefined because the size is invalid.

The behavior is implementation-defined. This seems to be the status quo, with current implementations using #3.

Act the same as size == 1, except that when the size == 1 case would fail but would succeed
if space were exactly 1 larger, the result is a pointer to the byte just past the end of the ptr buffer. That is, the
"aligned" version of a 0-byte object can be one past the end of an allocation. Such pointers are, of course, valid when not
dereferenced (and a "0-byte object" shouldn't be), but whether that is desired is not specified in the Standard's definition
of std::align, it appears. The output of the code sample is "1 8" in this case.

Act the same as size == 1; this means that returning "one past the end" is not a possible result. In this case,
the code sample's output is "0 -1".

The two compilers I could get working with std::align, Visual Studio 2013 and Clang 3.4, implement #3. (Change %td to
%Id on Visual Studio 2013 and earlier. 2014 and later will have %td.)

The BaseCharacteristic for is_constructible is defined in terms of the well-formedness
of a declaration for an invented variable. The well-formedness of the described declaration itself may
change for the same set of arguments because of the introduction of default arguments.

In the following program, there appears to be conflicting definitions of a specialization of
std::is_constructible; however, it seems that this situation is caused without a user violation
of the library requirements or the ODR. There is a similar issue with is_convertible, result_of
and others.

These sections define helper functions, some of which apply to initializer_list<T>. And they're
available if you include one of a long list of header files, many of which include <initializer_list>.
But they are not available if you include <initializer_list>. This seems very odd.

When resizing a vector, the accessibility and exception specification of the value type's
constructors determines whether the elements are copied or moved to the new buffer.
However, the copy/move is performed via the allocator's construct member function, which is
assumed, but not required, to call the copy/move constructor and propagate only exceptions
from the value type's copy/move constructor. The issue might also affect other classes.

The current wording in N4296 relevant here is from Table 28 — "Allocator requirements" in
20.5.3.5 [allocator.requirements]:

An allocator may constrain the types on which it can be instantiated and the arguments for which its
construct member may be called. If a type cannot be used with a particular allocator, the allocator class
or the call to construct may fail to instantiate.

I conclude the following from the wording:

The allocator is not required to call the copy constructor if the
arguments (args) are a single (potentially const) lvalue of the value
type; similarly for a non-const rvalue and the move constructor. See also
26.2.1 [container.requirements.general] p15, which seems to try to require
this but is not sufficient:
that paragraph specifies the semantics of the allocator's operations,
but not which constructors of the value type are used, if any.

The allocator may throw exceptions in addition to the exceptions propagated by
the constructors of the value type; it can also propagate exceptions from constructors
other than a copy/move constructor.

This leads to an issue with the wording of the exception safety guarantees for vector modifiers in
26.3.11.5 [vector.modifiers] p1:

[…]

void push_back(const T& x);
void push_back(T&& x);

Remarks: Causes reallocation if the new size is greater than the old capacity. If no
reallocation happens, all the iterators and references before the insertion point remain valid.
If an exception is thrown other than by the copy constructor, move constructor, assignment
operator, or move assignment operator of T or by any InputIterator operation there are
no effects.
If an exception is thrown while inserting a single element at the end and T
is CopyInsertable or is_nothrow_move_constructible<T>::value
is true, there are no effects. Otherwise, if an exception is thrown by the move constructor of a
non-CopyInsertable T, the effects are unspecified.

The wording leads to the following problem:
Copy and move assignment are invoked directly from vector.
For intermediary objects (see 2164),
vector also directly invokes the copy and move constructor of the value type.
However, construction of the actual element within the buffer is invoked via the allocator abstraction.
As discussed above, the allocator currently is not required to call a copy/move constructor.
If is_nothrow_move_constructible<T>::value is true for some value type T,
but the allocator uses modifying operations for MoveInsertion that do throw,
the implementation is required to ensure that "there are no effects",
even if the source buffer has been modified.

The construct member function template is only required for rebinding,
which can be required e.g. to store additional debug information in
the allocated memory (e.g. VS2013).

Even though the value type has an accessible and noexcept(true) move
constructor, this allocator won't call that constructor for rvalue arguments.
In any case, it does not call a constructor for which vector has formulated its
requirements. An exception thrown by a constructor called by this allocator is not
covered by the specification in 26.3.11.5 [vector.modifiers] and therefore is
guaranteed not to have any effect on the vector object when resizing.

Another problem arises for value types whose constructors are private,
but may be called by the allocator e.g. via friendship.
Those value types are not MoveConstructible
(is_move_constructible is false), yet they can be MoveInsertable.
It is not possible for vector to create intermediary objects (see 2164) of such a type
by directly using the move constructor.
Current implementations of the single-element forms of vector::insert and vector::emplace
do create intermediary objects by directly calling one of the value type's constructors,
probably to allow inserting objects from references that alias other elements of the container.
As far as I can see, Table 100 — "Sequence container requirements" in 26.2.3 [sequence.reqmts]
does not require that the creation of such intermediary objects can be performed
by containers using the value type's constructor directly.
It is unclear to me if the allocator's construct function could be used to create those
intermediary objects, given that they have not been allocated by the allocator.

Two possible solutions:

Add the following requirement to the allocator_traits::construct function:
If the parameter pack args consists of a single parameter of the type
value_type&&,
the function may only propagate exceptions if is_nothrow_move_constructible<value_type>::value
is false.

Instead of testing the value type's constructors via
is_move_constructible, check the value of
noexcept( allocator_traits<Allocator>::construct(alloc, ptr, rval) )
where
alloc is an lvalue of type Allocator,
ptr is an expression of type allocator_traits<Allocator>::pointer
and
rval is a non-const rvalue of type value_type.

A short discussion of the two solutions:

Solution 1 allows keeping is_nothrow_move_constructible<value_type>
as the criterion for vector to decide between copying and moving when resizing.
It restricts what can be done inside the construct member function of allocators,
and requires implementers of allocators to pay attention to the value types used.
One could conceive allocators checking the following with a static_assert:
If the value type is_nothrow_move_constructible,
then the constructor actually called for MoveInsertion within the construct
member function is also declared as noexcept.

Solution 2 requires changing both the implementation of the default
allocator (add a conditional noexcept) and vector (replace
is_move_constructible with an allocator-targeted check).
It does not impose additional restrictions on the allocator (other than
26.2.1 [container.requirements.general] p15),
and works nicely even if the move constructor of a MoveInsertable type is private or deleted
(the allocator might be a friend of the value type).

In both cases, an addition might be required to provide the basic exception safety guarantee.
A short discussion on this topic can be found
in the std-discussion mailing list.
Essentially, if allocator_traits<Allocator>::construct throws an exception,
the object may or may not have been constructed.
Two solutions are mentioned in that discussion:

allocator_traits<Allocator>::construct needs to tell its caller
whether or not the construction was successful, in case of an exception.

If allocator_traits<Allocator>::construct propagates an exception,
it shall either not have constructed an object at the specified location,
or that object shall have been destroyed
(or it shall ensure otherwise that no resources are leaked).

[2015-05-23, Tomasz Kamiński comments]

Solution 1 discussed in this issue also breaks support for the polymorphic_allocator proposed as part
of the Library Fundamentals TS v1, in addition to the already mentioned std::scoped_allocator_adaptor. Furthermore,
there is an unknown impact on other user-defined stateful allocator code written in C++11.

In addition, the library resolution proposed in LWG issue 2089 and
N4462
will break the relation between the std::allocator_traits::construct method and the
copy/move constructor even for the standard std::allocator. As an example, please consider the following class:

It's unspecified how many times copy_n increments the InputIterator.
uninitialized_copy_n is specified to increment it exactly n times,
which means if an istream_iterator is used then the next character
after those copied is read from the stream and then discarded, losing data.

I believe all three of Dinkumware, libc++ and libstdc++ implement
copy_n with n - 1 increments of the InputIterator, which avoids reading
and discarding a character when used with istream_iterator, but is
inconsistent with uninitialized_copy_n and causes surprising behaviour
with istreambuf_iterator instead, because copy_n(in, 2, copy_n(in, 2,
out)) is not equivalent to copy_n(in, 4, out).

[2016-08 Chicago]

Tues PM: refer to LEWG

Proposed resolution:

2472(i). Heterogeneous comparisons in the standard library can result in ambiguities

The last line here is ill-formed due to ambiguity: it might be rel_ops::operator!=, and it might be the
heterogeneous tuple operator!=. These are not partially ordered, because they have different constraints:
rel_ops requires the types to match, whereas the tuple comparison requires both types to be tuples (but not
to match). The same thing happens for user code that defines its own unconstrained
'template<typename T> operator!=(const T&, const T&)' rather than using rel_ops.

One straightforward fix would be to add a homogeneous overload for each heterogeneous comparison:

How do wstring_convert::from_bytes and wstring_convert::to_bytes use
the cvtstate member?

Is it passed to the codecvt member functions? Is a copy of it passed
to the member functions? "Otherwise it shall be left unchanged"
implies a copy is used, but if that's really what's intended there are
simpler ways to say so.

Is the same conversion state object used for converting both the get
and put areas? That means a read which runs out of bytes halfway
through a multibyte character will leave some shift state in cvtstate,
which would then be used by a following write, even though the shift
state of the get area is unrelated to the put area.

If a codecvt conversion returns codecvt_base::error should that be
treated as EOF? An exception? Should all the successfully converted
characters before a conversion error be available to the users of the
wbuffer_convert and/or the internal streambuf, or does a conversion
error lose information?

Proposed resolution:

2481(i). wstring_convert should be more precise regarding "byte-error string" etc.

Paragraph 4 of D.18.1 [depr.conversions.string] introduces byte_err_string
as "a byte string to display on errors". What does display mean? The string is returned
on error, it's not displayed anywhere.

Paragraph 14 says "Otherwise, if the object was constructed with a
byte-error string, the member function shall return the byte-error
string." The term byte-error string is not used anywhere else.

Paragraph 17 talks about storing "default values in byte_err_string".
What default value? Is "Hello, world!" allowed? If it means
default-construction it should say so. If paragraph 14 says it won't
be used what does it matter how it's initialized? The end of the
paragraph refers to storing "byte_err in byte_err_string". This should
be more clearly related to the wording in paragraph 14.

It might help if the constructor (and destructor) was specified before
the other member functions, so it can more formally define the
difference between being "constructed with a byte-error string" and
not.

Comparing the address of unrelated objects is not a constant expression since the result is unspecified, so
it could be expected for [1] to fail and [2] to succeed. However, std::less specialization for pointer
types is well-defined and yields a total order, so it could just as well be expected for [1] to succeed. Finally,
since the implementation of such specializations is not mandated, [2] could fail as well. (This could happen if
an implementation provided such a specialization using built-in functions that are not
allowed in constant expressions, for example.) In any case, the standard should be clear so as to avoid
implementation-defined constexpr-ness.

[2017-01-22, Jens provides rationale and proposed wording]

std::less<T*> is required to deliver a total order on pointers.
However, the layout of global objects is typically determined
by the linker, not the compiler, so requiring std::less<T*> to
provide an ordering at compile-time that is consistent with
run-time would need results from linking to feed back to
the compiler, something that C++ has traditionally not required.

-2- For templates less, greater, less_equal, and greater_equal, […],
if the call operator calls a built-in operator comparing pointers, the call operator yields a strict total order
that is consistent among those specializations and is also consistent with the partial order imposed by those
built-in operators. Relational comparisons of pointer values are not required to be usable as constant expressions.

The typical use-case of std::initializer_list<T> is for a pass-by-value parameter of T's constructor.
However, this contravenes 20.5.4.8 [res.on.functions]/2.5 because initializer_list doesn't specifically allow
incomplete types (as do for example std::unique_ptr (23.11.1 [unique.ptr]/5) and
std::enable_shared_from_this (23.11.6 [util.smartptr.enab]/2)).

A resolution would be to copy-paste the relevant text from such a paragraph.

Proposed resolution:

2496(i). Certain hard-to-avoid errors not in the immediate context are not allowed to be triggered by
the evaluation of type traits

In particular, I do not see where the wording allows for the "compilation of the expression"
declval<T>() = declval<U>() to occur as a consequence of instantiating std::is_assignable<T, U>
(where T and U are, respectively, A<int> and int in the example code).

Instantiating A<int> as a result of requiring it to be a complete type does not trigger the instantiation of
B<int>; however, the "compilation of the expression" in question does.

-7- Effects: Behaves like a formatted input member (as described in 30.7.4.2.1 [istream.formatted.reqmts])
of in. After a sentry object is constructed, operator>> extracts characters and
stores them into successive locations of an array whose first element is designated by s. If width()
is greater than zero, n is [del]width()[/del] [ins]min(size_t(width()), N)[/ins]. Otherwise
n is [del]the number of elements of the largest
array of char_type that can store a terminating charT()[/del] [ins]N[/ins]. n is the
maximum number of characters stored.

The class template basic_streambuf<charT, traits> serves as an abstract base class for deriving various
stream buffers whose objects each control two character sequences: […]

The term "abstract base class" is not defined in the standard, but "abstract class" is (13.4 [class.abstract]).

According to the synopsis basic_streambuf has no pure virtual
functions so is not an abstract class and none of libstdc++, libc++, or
dinkumware implement it as an abstract class. I don't believe the wording was
ever intended to require it to be an abstract class, but it could be
read that way.

I suggest the wording be changed to "polymorphic base class" or
something else that can't be seen to imply a normative requirement to
make it an abstract class.

The concurrency libraries specified in clauses 29 and 30 do not adequately specify how they relate to the concurrency model
specified in 6.8.2 [intro.multithread]. In particular:

6.8.2 [intro.multithread] specifies "atomic objects" as having certain properties. I can only assume that instances
of the classes defined in Clause 29 are intended to be "atomic objects" in this sense, but I can't find any wording to
specify that, and it's genuinely unclear whether Clause 30 objects are atomic objects. In fact, on a literal reading the
C++ Standard doesn't appear to provide any portable way to create an atomic object, or even determine whether an
object is an atomic object.

(It's not clear if the term "atomic object" is actually needed, given that atomic objects can have non-atomic operations,
and non-atomic objects can have atomic operations. But even if the term itself goes away, there still needs to be some
indication that Clause 29 objects have the properties currently attributed to atomic objects).

Similarly, 6.8.2 [intro.multithread] uses "atomic operation" as a term of art, but the standard never unambiguously
identifies any operation as an "atomic operation" (although in one case it unambiguously identifies an operation that is
not atomic). It does come close in a few cases, but not close enough:

6.8.2 [intro.multithread]/p7 could be read to imply that "synchronization operations" in Clauses 29 and 30
are also atomic operations. However, that's vague and indirect, and somewhat belied by 33.4.3.2 [thread.mutex.requirements.mutex]/p5,
which specifies that mutex lock and unlock operations "behave as atomic operations", but only "for purposes of determining
the existence of a data race". Furthermore, not a single operation in Clause 29 explicitly identifies itself as a
"synchronization operation".

32.6 [atomics.types.generic]/p4 states in part that "There shall be a specialization atomic<bool>
which provides the general atomic operations as specified in 29.6.1", but read in context, "general atomic operations"
appears to be a loose synonym for "general operations on atomic types" as defined in [atomics.types.operations.general],
rather than a use of "atomic operation" as Words of Power. Incidentally, "atomic type" is never satisfactorily defined either
(although the <atomic> synopsis comes close).

21.11 [support.runtime]/p10 specifies exactly which operations are "plain lock-free atomic operations", but
in a standard where an "integral constant expression" isn't necessarily a "constant expression", I do not feel safe assuming
that a "plain lock-free atomic operation" is an "atomic operation".

Hans Boehm tells me the operations with "atomically" in the Effects element are intended to be atomic operations,
but since "atomic operation" is a term of art (e.g. in 6.8.2 [intro.multithread]/p27.4), I think this needs to be
spelled out rather than assumed. Furthermore, this does not help with 32.9 [atomics.fences], or anything in Clause 30.

It should introduce a "synchronizes with" relationship. "Happens before" is too weak, since it may not compose
with "sequenced before".

The "shall not introduce a data race" wording is probably not technically correct either. These may race with other
(non-allocation/deallocation) concurrent accesses to the object being allocated or deallocated.

23.13.4 [allocator.adaptor.members]/10 requires that the argument types in the piecewise-construction tuples
all be CopyConstructible. These tuples are typically created by std::forward_as_tuple, such as in
¶13. So they will be a mix of lvalue and rvalue references, the latter of which are not CopyConstructible.

My guess is that CopyConstructible was specified to feed the tuple_cat, before that function could
handle rvalues. Since the argument tuple is already moved in ¶11, the requirement is obsolete. It should either
be changed to MoveConstructible, or perhaps better, convert the whole tuple to references (i.e. form
tuple<Args1&&...>) so nothing needs to be moved. After all, this is a facility for handling non-movable
types.

It appears that the resolution of DR 2203, which added std::move to ¶11, simply omitted the
change to ¶10.

I recently encountered a failure related to questionable use of do_get_year. The platform where the code happened
to work had an implementation which handled certain three-digit "year identifiers" as the number of years since
1900 (this article describes such an implementation).

25.4.5.1.2 [locale.time.get.virtuals] makes it implementation defined whether two-digit years are accepted, etc., but does not
say anything specifically about three-digit years.

The implementation freedom to not report errors in 25.4.5.1 [locale.time.get] paragraph 1 also seems to be too broad.

The allocator-aware container requirements in Table 98 impose no
MoveAssignable requirements on the value_type when
propagate_on_container_move_assignment is true, because typically the
container's storage would be moved by just exchanging some pointers.

However for a basic_string using the small string optimization move
assignment may need to assign individual characters into the small
string buffer, even when the allocator propagates.

The only requirements on the char-like objects stored in a basic_string
are that they are non-array POD types and Destructible, which means
that a POD type with a deleted move assignment operator should be
usable in a basic_string, despite it being impossible to move assign:

The reason seems to be that converting a double x in the range [0, 1) to float may result in 1.0f
if x is close enough to 1. I see two possibilities to fix that:

use internally double (or long double?) and then convert the result at the very end to float.

take only 24 random bits and convert them to a float x in the range [0, 1) and then return -log(1 - x).

I have not checked if std::exponential_distribution<double> has the same problem:
For float, on average 1 out of 2^24 (~10^7) draws returns "inf", which is easily confirmed.
For double, on average 1 out of 2^53 (~10^16) draws might return "inf", which I have not tested.

Marshall:
I don't think the problem is in std::exponential_distribution; but rather in generate_canonical.

Here is a link to two sample compilations. The first uses
libstdc++ and constructs in reverse order, and the second uses libc++ and constructs in order.

A std::tuple mimics both a struct and type-generic container and should thus follow their standards. Construction is
fundamentally different from a function call, and it has been historically important for a specific order to be guaranteed;
namely: whichever the developer may decide. Mandating construction order will allow developers to reference younger elements
later on in the chain as well, much like a struct allows you to do with its members.

There are implementation issues as well. Reversed lists will require unnecessary overhead for braced-initializer-list initialization.
Since lists are evaluated from left to right, the initializers must be placed onto the stack to respect the construction order.
This issue could be significant for large tuples, deeply nested tuples, or tuples with elements that require
many constructor arguments.

I propose that the std::tuple<A, B, ..., Y, Z>'s constructor implementation be standardized, and made to construct
in the same order as its type list e.g. A{}, B{}, ..., Y{}, Z{}.

Daniel:

When N3140 became accepted, wording had been
added that gives at least an indication of requiring element initialization in the order of the declaration of the template
parameters. This argumentation can be based on 23.5.3.1 [tuple.cnstr] p3 (emphasis mine):

-3- In the constructor descriptions that follow, let i be in the range [0, sizeof...(Types)) in order,
Ti be the ith type in Types, and Ui be the
ith type in a template parameter pack named UTypes, where indexing is
zero-based.

But the current wording needs to be improved to make that intention clearer and an issue like this one is necessary to be sure that
the committee is agreeing (or disagreeing) with that intention, especially because N3140 didn't really point out the relevance of the element
construction order in the discussion, and because not all constructors explicitly refer to the ordered sequence of numbers generated
by the variable i (the move constructor does it right, but most others don't).

[2017-02-12, Alisdair comments]

Note that this issue should not be extended to cover the assignment operators,
as implementations may want the freedom to re-order member-wise assignment
so that, for example, all potentially-throwing assignments are performed before
non-throwing assignments (as indicated by the noexcept operator).

When a shared-state is released, it may be necessary to execute user defined code for the destructor of a
stored value or exception. It is unclear whether the execution of said destructor constitutes an observable side effect.

While discussing N4445 in Lenexa, Nat Goodspeed pointed out that 33.6.5 [futures.state]/5.1 does not explicitly
mention the destruction of the result, so implementations should be allowed to release (or reuse) a shared state ahead
of time under the "as-if" rule.

The standard should clarify whether the execution of destructors is a visible side effect of releasing a shared state.

promise::set_value_at_thread_exit and promise::set_exception_at_thread_exit operate on a shared state
at thread exit, without making the thread participate in the ownership of such shared state.

Consider the following snippet:

std::promise<int>{}.set_value_at_thread_exit(42);

Arguably, since the promise abandons its shared state without actually making it ready, a broken_promise
error condition should be stored in the shared state. Implementations diverge: they either crash at thread exit by
dereferencing an invalid pointer, or keep the shared state around until thread exit.

-10- Some functions (e.g., promise::set_value_at_thread_exit) [del]delay making the shared state ready
until[/del] [ins]schedule the shared state to be made ready when[/ins] the
calling thread exits. This associates a reference to the shared state with the calling thread. The
destruction of each of that thread's objects with thread storage duration
(6.6.4.2 [basic.stc.thread]) is sequenced before making that shared state ready. When the calling
thread makes the shared state ready, if the thread holds the last reference to the shared state, the shared state
is destroyed. [Note: This means that the shared state may not become ready until after the asynchronous
provider has been destroyed. — end note]

2533(i). [concurr.ts] Constrain threads where future::then can run a continuation

In N4538, the continuation given to
future::then can be run "on an unspecified thread of execution". This is too broad, as it allows the
continuation to be run on the main thread, a UI thread, or any other thread. In comparison, functions given to
async run "as if in a new thread of execution", while the Parallelism TS gives fewer guarantees by running
"in either the invoking thread or in a thread implicitly created by the library to support parallel algorithm execution".
The threads on which the continuation given to future::then can run should be similarly constrained.

[2017-03-01, Kona, SG1]

Agreement that this is a problem. Suggested addition to the issue is below. We have no immediate delivery vehicle
for a fix at the moment, but we would like to make the intended direction clear.

There is SG1 consensus that .then continuations should, by default, and in the absence of executors, be run
only in the following ways:

If the future is not ready when .then() is called, the .then argument may be run on the execution
agent that fulfills the promise.

In all cases, the .then argument may be run on an implementation-provided thread, i.e. a thread that is
neither the main thread nor explicitly created by the user.

In the absence of an executor argument (which currently cannot be supplied), running of the .then() continuation
will not block the thread calling .then(), even if the future is ready at the time.

Straw polls:

SF | F | N | A | SA

For the default behaviour:

"1. Run on completed task or new execution agent"

0 | 7 | 5 | 1 | 0

"2. Run on completed task or .then caller"

0 | 0 | 5 | 5 | 3

"3. Leave as implementation defined"

1 | 2 | 4 | 3 | 3

"4. Always new execution agent"

2 | 3 | 6 | 2 | 0

The actual conclusion was to allow either (1) or (4) for now, since they are quite close, but present a very different
programming model from (2).

basic_regex member functions shall not call any locale dependent C or C++ API, including the formatted
string input functions. Instead they shall call the appropriate traits member function to achieve the required effect.

Yet, the required interface for a regular expression traits class (31.3 [re.req]) does not appear to have
any reliable method for determining whether a character as encoded for the locale associated with the traits
instance is the same as a character represented by a UnicodeEscapeSequence, e.g., assuming a sane
ru_RU.koi8r locale:

A number of places in the library, including 23.14.7 [comparisons]/14, the Optional container requirements in
26.2.1 [container.requirements.general], and 33.3.2.1 [thread.thread.id]/8, use the phrase "total order".
Unfortunately, that phrase is ambiguous. In mathematics, the most common definition is that a relation ≤ is
a total order if it's total, transitive, and antisymmetric in the sense that x≤y ∧ y≤x ⇒ x=y.
What we really want is a strict total order: a relation < is a strict total order if it's total, transitive, and
antisymmetric in the sense that exactly one of x<y, y<x, and x=y holds.

The non-normative note in 28.7 [alg.sorting]/4 correctly uses the phrase "strict total ordering" rather than
simply "total ordering".

We could address this issue by replacing "total order" with "strict total order" everywhere it appears, since I
think there are no cases where we actually want a non-strict total order, or we could add something in Clause 17 saying
that we always mean strict total order whenever we say total order.

Again, the unqualified lookup for swap finds the member swap instead of the result of a normal argument-dependent
lookup, making this ill-formed.

A second example of such a problem recently entered the arena with the addition of the propagate_const template
with another member swap (99 [fund.ts.v2::propagate_const.modifiers]):

constexpr void swap(propagate_const& pt) noexcept(see below);

-2- The constant-expression in the exception-specification is noexcept(swap(t_, pt.t_)).

A working approach is presented in
N4511. By adding a new
trait to the standard library and referencing this by the library fundamentals (A similar approach had been applied in the
file system specification
where the quoted manipulator from C++14 had been referred to, albeit the file system specification is generally based on the
C++11 standard), optional's member swap exception specification could be rephrased as follows:

The combination of 20.5.5.5 [member.functions] paragraphs 2 and 3 that LWG
2259 produces seems to drop the requirement that any call behaves as if
no overloads were added. Paragraph 3 used to say
"A call to a member function signature described in the C++ standard
library behaves as if the implementation
declares no additional member function signatures."
whereas the new wording says
"provided that any call to the member function that would select an
overload from the set of declarations described in this standard
behaves as if that overload were selected."

This can be read as meaning that if there's no default constructor
specified, like for instance for std::ostream, an implementation is free to
add it. It can also be read as meaning that an implementation is free to
add any overloads that wouldn't change the overload resolution result
of any call expression that would select a specified overload. That's
vastly different from allowing extensions that add new functions rather
than new overloads.

This example is accepted by libstdc++, msvc rejects it, and clang+libc++
segfault on melpon.org/wandbox o_O. An earlier clang+libc++ just accepts
it. I don't think the implementation divergence is caused by the acceptance
of the referred-to 2259, but it certainly seems to increasingly bless
the implementation divergence.

must type-erase and store the provided allocator, since the operator= specification requires using the "allocator
specified in the construction of" the std::experimental::function object. This may require a dynamic allocation
and so cannot be noexcept. Similarly, the following constructors

cannot satisfy the C++14 requirement that they "shall not throw exceptions if [the function object to be stored]
is a callable object passed via reference_wrapper or a function pointer" if they need to type-erase and store the
allocator.

Insert the following paragraphs after 99 [fund.ts.v2::func.wrap.func.con]/1:

[Drafting note: This just reproduces the wording from C++14 with the "shall not throw exceptions for
reference_wrapper/function pointer" provision deleted. — end drafting note]

-1- When a function constructor that takes a first argument of type allocator_arg_t is invoked,
the second argument is treated as a type-erased allocator (8.3). If the constructor moves or makes a copy
of a function object (C++14 §20.9), including an instance of the experimental::function class template,
then that move or copy is performed by using-allocator construction with allocator get_memory_resource().

-?- Throws: May throw bad_alloc or any exception thrown by the copy constructor of the stored callable object.
[Note: Implementations are encouraged to avoid the use of dynamically allocated memory for small callable objects,
for example, where f's target is an object holding only a pointer or reference to an object and a member function pointer.
— end note]

-?- Otherwise, *this targets a copy of f initialized with std::move(f). [Note:
Implementations are encouraged to avoid the use of dynamically allocated memory for small callable objects, for example,
where f's target is an object holding only a pointer or reference to an object and a member function pointer. —
end note]

-?- Throws: May throw bad_alloc or any exception thrown by F's copy or move constructor.

-2- In the following descriptions, let ALLOCATOR_OF(f) be the allocator specified in the construction
of function f, or allocator<char>() if no allocator was specified.

[…]

2592(i). Require that chrono::duration_casts from smaller durations to larger durations do not overflow

However, a duration_cast<minutes>(seconds::max()) would cause overflow if the underlying signed integers
only met the minimums specified.

The standard should specify that implementations guarantee that a duration_cast from any smaller duration in
these "convenience typedefs" will not overflow any larger duration. That is, hours should be able to hold
the maximum of minutes, which should be able to hold the maximum of seconds and so on.

More formally, if the ratio between typedef A and typedef B is 1:Y where Y > 1 (e.g.,
1 : 60 in the case of minutes : seconds), then #bitsA − 1 must be at least
ceil(log2(2^(#bitsB − 1)/Y)).

These bits were chosen to satisfy the above formula. Note that
minimums only increased, so larger ranges could be held. A nice
outcome of this choice is that minutes does not go above 32 bits.

[2016-04-23, Tim Song comments]

The P/R of LWG 2592 doesn't fix the issue it wants to solve, because the actual underlying type will likely
have more bits than the specified minimum.

Consider seconds, which the P/R requires to have at least 37 bits. On a typical system this implies
using a 64-bit integer. To ensure that casting from seconds::max() to minutes doesn't overflow
in such a system, it is necessary for the latter to have at least 59 bits (which means, in practice, 64 bits too),
not just 32 bits. Thus, just changing the minimum number of bits will not provide the desired guarantee
that casting from a smaller unit to a larger one never overflows.

If such a guarantee is to be provided, it needs to be spelled out directly. Note that the difference here is 9 bits
(for the 1000-fold case) and 5 bits (for the 60-fold case), which is less than the size difference between integer
types on common systems, so such a requirement would effectively require those convenience typedefs to use the
same underlying integer type.

Effects: Constructs a shared_ptr object that owns the object p and the deleter d.

Please note that it says "owns the object". This was intentionally
changed from "the pointer" as a part of resolution for LWG defect 758,
to cover nullptr_t case.

Since shared_ptr(nullptr, d) owns an object of type nullptr_t, but does
not own a pointer, it is "empty" by a strict reading of the
above mentioned definition in 23.11.3 [util.smartptr.shared] p1.

It could be less confusing if shared_ptr(nullptr, d) could be defined to
be empty. But it seems too late to change that (which means changing
whether the deleter is called or not, see
this Stackoverflow article).
So I'm proposing to just fix the contradiction.

-1- The shared_ptr class template stores a pointer, usually obtained via new. shared_ptr
implements semantics of shared ownership; the last remaining owner of the pointer is responsible for destroying the
object, or otherwise releasing the resources associated with the stored pointer. A shared_ptr object is
empty if it does not own an object.

Issue 386 changed the return type of reverse_iterator::operator[] to unspecified. However,
as of N3066, the return type of a random access iterator's operator[] shall be convertible to reference;
thus the return type of reverse_iterator::operator[] should be reference (and it is in all common
implementations).

DR 41, "Ios_base needs clear(), exceptions()" stopped short of providing the interface
suggested in its title, but it did require the underlying state to be stored in ios_base. Because rdstate()
is also missing, ios_base manipulators relying on iword and pword cannot detect failure.
The only safe alternative is to manipulate a derived class, which must be a template.

libc++ already provides the interface as a nonconforming extension. libstdc++ implements the internal state but leaves
it frustratingly inaccessible, as specified. Any conforming implementation should be able to provide the interface
without ABI problems.

filesystem::copy doesn't create a symlink to a directory in this case:

copy("/", "root", copy_options::create_symlinks);

If the first path is a file then a symlink is created, but I think my
implementation is correct to do nothing for a directory. We get to
bullet 30.11.14.3 [fs.op.copy] (3.6) where is_directory(f) is true, but options
== create_symlinks, so we go to the next bullet (3.7) which says
"Otherwise, no effects."

I think the case above should either create a symlink, or should
report an error. GNU cp -s gives an error in this case, printing
"omitting directory '/'". An error seems reasonable; you can use
create_symlink to create a symlink to a directory.

In N4169
the author dropped the invoke<R> support, claiming
that it's unnecessary cruft in TR1, obsoleted by C++11
type inference. But there is now renewed interest in
*INVOKE*(f, t1, t2, ..., tN, R), that is, discarding the
return type when R is void. This form is very useful, or
possibly even more useful than the basic form when
implementing a call wrapper. Also note that the optional
R support is already in std::is_callable and
std::is_nothrow_callable.

[2016-07-31, Tomasz Kamiński comments]

The lack of invoke<R> was basically a result of the concurrent publication of the newer revision
of the paper and the additional special semantics of INVOKE(f, args..., void).

In contrast to the existing std::invoke function, the proposed invoke<R> version is not
SFINAE-friendly: the elimination of the standard invoke from overload resolution is guaranteed
by the std::result_of_t in its result type, which is missing from the proposed invoke<R> version.
To provide this guarantee, the following remarks shall be added to the specification:

The usage of is_callable_v<F(Args...), R> causes problems in situations where either F or one
of the Args is an abstract type and the function type F(Args...) cannot be formed, or when one of
the args is cv-qualified, as top-level cv-qualification of function parameters is dropped by the
language rules. It should use is_callable_v<F&&(Args&&...), R> instead.

(a) "the number of characters generated for the specified format" (excluding fill padding) includes exactly
one character for money_base::space (if present), and

(b) all characters corresponding to money_base::space (excluding fill padding) are copies of fill.

In particular, there is implementation divergence over point (b) as to whether U+0020 or fill should be used.
Further, should a character other than fill be used, it is unclear, when "the fill characters are
placed where none or space appears in the formatting pattern", whether the fill characters are placed
at the beginning or the end of the "space field".

I believe that a strict interpretation of the current wording supports U+0020; however, fill is more likely
to be the pragmatic choice.

-2- Where none or space appears, white space is permitted in the format, except where none
appears at the end, in which case no white space is permitted. For input, the value space indicates that
at least one space is required at that position. For output, the value space indicates one instance of the
fill character (25.4.6.2.2 [locale.money.put.virtuals]). Where symbol appears, the sequence of characters returned by
curr_symbol() is permitted, and can be required. Where sign appears, the first (if any) of the
sequence of characters returned by positive_sign() or negative_sign() (respectively as the monetary
value is non-negative or negative) is required. Any remaining characters of the sign sequence are required after all
other format components. Where value appears, the absolute numeric monetary value is required.

2693(i). constexpr for various std::complex arithmetic and value operators

This modification will allow complex-number arithmetic to be performed at compile time. From a mathematical
standpoint, it is natural (and desirable) to treat complex numbers on the same footing as the reals.
From a programming perspective, this change will broaden the scope in which std::complex can be used,
allowing it to be smoothly incorporated into classes exploiting constexpr.

Suggested resolution:

The following functions in the std::complex namespace should be made constexpr:

any call to the member function that would select an overload from the set of declarations described in this
standard behaves as if that overload were selected

is unclear in the extent of the "as if". For example, in providing:

basic_string(const charT* s);

for a one-argument call to:

basic_string(const charT* s, const Allocator& a = Allocator());

it can be read that an implementation may be required to call the copy constructor for the allocator since
the core language rules for copy elision would not allow the "a" argument to be constructed directly into
the member used to store the allocator.

appear to implicitly require rhs be valid (e.g., by referring to its shared state, and by requiring a
valid() == true postcondition). However, they are also marked noexcept, suggesting that they
are wide-contract; this also makes the usual suggested handling for invalid futures, throwing a
future_error, impossible.

Either the noexcept should be removed, or the behavior with an invalid future should be specified.

Alternative #2: Specify that an empty (shared_)future object is constructed if rhs is invalid, and adjust
the postcondition accordingly.

Edit 99 [concurr.ts::futures.unique_future] as indicated:

future(future<future<R>>&& rhs) noexcept;

-3- Effects: If rhs.valid() == false, constructs an empty future object that does not
refer to a shared state. Otherwise, constructs a future object from the shared state
referred to by rhs. The future becomes ready when one of the following occurs:

Both the rhs and rhs.get() are ready. The value or the exception from
rhs.get() is stored in the future's shared state.

rhs is ready but rhs.get() is invalid. An exception of type
std::future_error, with an error condition of std::future_errc::broken_promise
is stored in the future's shared state.

-4- Postconditions:

valid() returns the same value as rhs.valid() prior to
the constructor invocation.

rhs.valid() == false.

Edit 99 [concurr.ts::futures.shared_future] as indicated:

shared_future(future<shared_future<R>>&& rhs) noexcept;

-3- Effects: If rhs.valid() == false, constructs an empty shared_future object that does not
refer to a shared state. Otherwise, constructs a shared_future object from the shared state
referred to by rhs. The shared_future becomes ready when one of the following occurs:

Both the rhs and rhs.get() are ready. The value or the exception from
rhs.get() is stored in the shared_future's shared state.

rhs is ready but rhs.get() is invalid. The shared_future
stores an exception of type std::future_error, with an error condition of
std::future_errc::broken_promise.

-4- Postconditions:

valid() returns the same value as rhs.valid() prior to
the constructor invocation.

It unconditionally does clear() and then insert(begin(), n, t).
I looked into my local "%PROGRAMFILES(X86)%/Microsoft Visual Studio 14.0/VC/include/vector".

One drawback of the libstdc++ implementation that I could find so far is
possibly increased peak memory usage (both the old and the new buffer exist at
the same time). But, because the same can happen with most other
modifications, it seems a reasonable trade-off to remove the
precondition and fill the subtle gap. Users who really need less memory
usage can do clear() and insert() by themselves.

I also found that basic_string::assign(n, c) is safe on this point.
At 24.3.2.6.3 [string.assign] p17:

basic_string& assign(size_type n, charT c);

Effects: Equivalent to assign(basic_string(n, c)).

Returns: *this.

This can be seen as another gap.

Looking back on the history, I found that the definition of assign(n, t)
was changed at C++14 for library issue 2209. There were more restricting
definitions like this:

void assign(size_type n, const T& t);

Effects:

erase(begin(), end());
insert(begin(), n, t);

I think the precondition was probably set to accept this old definition
and is not required inherently. And if the less memory usage was really
intended, the standard is now underspecifying about that.

[Drafting note: The following makes the specification of recursion_pending() seemingly recursive.
Perhaps it would be easier to specify recursion_pending() in terms of an exposition-only member of
recursive_directory_iterator.]

bool recursion_pending() const;

[…]

-24- Returns: false if
disable_recursion_pending() has been called subsequent to the prior construction or increment operation,
otherwise the value of recursion_pending() set by that operation.

-10- Postconditions: options(), depth(), and
recurse_ have the values that rhs.options(),
rhs.depth(), and rhs.recurse_, respectively,
had before the function call.

-15- Postconditions: options(), depth(), and
recurse_ have the values that rhs.options(),
rhs.depth(), and rhs.recurse_, respectively,
had before the function call.

[…]

bool recursion_pending() const;

-21- Returns: recurse_.

During the LWG discussion of this issue it has been observed that the interpretation of the embedded see below
is not really clear and that we should split the declaration and definition of the new overloads, so that we have a place
that allows us to specify what "see below" stands for. In addition, the new wording wraps the "see below"
as "size_type(see below)" to clarify the provided expression type, similarly to what we did for the default
constructor of unordered_map.

If the character extracted is equal to is.widen('('), extracts an object u of type T from is,
then extracts a character from is.

If this character is equal to is.widen(')'), then assigns complex<T>(u) to x.

Otherwise, if this character is equal to is.widen(','), extracts an object v of type T
from is, then extracts a character from is.
If this character is equal to is.widen(')'), then assigns complex<T>(u, v) to x;
otherwise returns the character to is and the extraction fails.

Otherwise, returns the character to is and the extraction fails.

Otherwise, returns the character to is, extracts an object u of type T from is, and
assigns complex<T>(u) to x.

In the description above, characters are extracted from is as if by operator>>
(30.7.4.2.3 [istream.extractors]), and returned to the stream as if by basic_istream::putback
(30.7.4.3 [istream.unformatted]). Character equality is determined using traits::eq.
An object t of type T is extracted from is as if by is >> t.

If any extraction operation fails, no further operation is performed and the whole extraction fails.

-?- Effects: Let PEEK(is) be a formatted input function (30.7.4.2.1 [istream.formatted.reqmts]) of
is that returns the next character that
would be extracted from is by operator>>. [Note: The sentry object is constructed
and destroyed,
but the returned character is not extracted from the stream. — end note]

If PEEK(is) is not equal to is.widen('('), extracts an object u of type T
from is, and assigns complex<T>(u) to x.

Otherwise, extracts that character from is, then extracts an object u of type T from is, then:

If PEEK(is) is equal to is.widen(')'), then extracts that character from is and
assigns complex<T>(u) to x.

Otherwise, if it is equal to is.widen(','), then extracts that character from is and then extracts
an object v of type T from is, then:

If PEEK(is) is equal to is.widen(')'), then extracts that character from is and
assigns complex<T>(u, v) to x.

Otherwise, the extraction fails.

Otherwise, the extraction fails.

In the description above, characters are extracted from is as if by operator>> (30.7.4.2.3 [istream.extractors]), character equality is determined using traits::eq, and an object t of type T is extracted from is
as if by is >> t.

If any extraction operation fails, no further operation is performed and the whole extraction fails.

I used a sorting library which used numeric_limits<T>::max() as a sentinel value.
GCC's libstdc++ provides a numeric_limits specialisation for that type, but
Clang's libc++ does not.

This broke the sorting for me on different platforms, and it was quite difficult to determine why. If the default
numeric_limits didn't default to 0s and false values (18.3.2.4 of N4582), and instead
static_asserted, causing my code to not compile, I would have found the solution immediately.

I know that __uint128_t is non-standard, so neither GCC nor Clang is doing the wrong thing nor the right thing
here. I could just submit a patch to libc++ providing the specialisations, but it doesn't fix the problem at its core.

I am wondering, what is the rationale behind the defaults being 0 and false? It seems like it is
inviting a problem for any future numeric types, whether part of a library, compiler extension, and possibly even
future updates to C++'s numeric types. I think it would be much better to prevent code that tries to use
unspecified numeric_limits from compiling.

An alternative to this suggestion would be to still define the primary template, but not provide any of the members
except is_specialized. Either way, this would make numeric_limits members SFINAEable.

Along the same lines, one might wonder why the members that only make sense for floating-point types are required to
be defined to nonsense values for integer types.

[2016-11-12, Issaquah]

Sat PM: This looks like a good idea. Jonathan and Marshall will do post C++17 implementations and report back.

It should be considered whether the description of the
single-object allocation functions should say "or smaller", like
the array allocation functions. For example, according to 21.6.2.1 [new.delete.single] p1 (emphasis mine):

The allocation function (3.7.4.1) called by a new-expression (5.3.4) to allocate size bytes of
storage suitably aligned to represent any object of that size.

The allocation function (3.7.4.1) called by the array form of a new-expression (5.3.4) to allocate
size bytes of storage suitably aligned to represent any array object of that size or smaller.
(footnote: It is not the direct responsibility of operator new[](std::size_t) or operator delete[](void*)
to note the repetition count or element size of the array. Those operations are performed elsewhere in the array
new and delete expressions. The array new expression, may, however, increase the size
argument to operator new[](std::size_t) to obtain space to store supplemental information.)

This seems to date from the days of adaptable function objects with an argument_type typedef, but in
modern C++ the predicate might not have an argument type. It could have a function template that accepts various
arguments, so it doesn't make sense to state requirements in terms of a type that isn't well defined.

[2016-07, Toronto Saturday afternoon issues processing]

The proposed resolution needs to be updated because the underlying wording has changed.
Also, since the sequence is homogeneous, we shouldn't have to say that the expression is well-formed
for all elements in the range; that implies that it need not be well-formed if the range is empty.

-12- Requires:InputIterator's value type shall be CopyAssignable, and shall be writable
(27.2.1 [iterator.requirements.general]) to the out_true and out_falseOutputIterators, and shall be convertible to Predicate's argument typethe
expression pred(*i) shall be well-formed for all i in [first, last).
The input range shall not overlap with either of the output ranges.

-16- Requires:ForwardIterator's value type shall be convertible to Predicate's argument
typeThe expression pred(*i) shall be well-formed for all i in [first, last).
[first, last) shall be partitioned by pred, i.e. all elements that satisfy pred shall appear
before those that do not.

The private members of node_handle are missing the usual "exposition only" comment. As a consequence,
ptr_ and alloc_ now appear to be names defined by the library (so programs defining these names
as macros before including a library header have undefined behavior).

Presumably this is unintentional and these members should be considered to be for exposition only.

It's also not clear whether the name node_handle is reserved for library usage or not;
26.2.4.1 [container.node.overview]/3 says the implementation need not provide a type with this name, but
doesn't seem to rule out the possibility that an implementation will choose to do so regardless.

Daniel:

A similar problem seems to exist for the exposition-only type call_wrapper from
p0358r1, which exposes a private data member named fd and
a typedef FD.

[2016-07 Chicago]

Jonathan says that we need to make clear that the name node_handle is not reserved

Proposed resolution:

2746(i). Inconsistency between requirements for emplace between optional and variant

Why the inconsistency? Should all the cases have a SFINAE requirement?

I see that variant has an additional requirement (T occurs exactly once in Types...), but that
only argues that it must be a SFINAE condition; it doesn't say that the other cases (any/variant) should not have one.

map/multimap/unordered_map/unordered_multimap have SFINAE'd versions of
emplace that don't take initializer_lists, but they don't have any emplace versions
that take ILs.

The C++14 standard contains no language that guarantees the deleter run by a
shared_ptr will see all associated weak_ptr instances as expired. For example,
the standard doesn't appear to guarantee that the assertion in the following
snippet won't fire:

It seems clear that the intent is that associated weak_ptrs are expired,
because otherwise shared_ptr deleters could resurrect a reference to an object
that is being deleted.

Suggested fix: 23.11.3.2 [util.smartptr.shared.dest] should specify that the decrease in
use_count() caused by the destructor is sequenced before the call to the
deleter or the call to delete p.

[2016-11-08, Jonathan and STL suggest NAD]

STL and Jonathan feel that the example has unspecified behaviour, and the
assertion is allowed to fire, and that's OK (the program's expectation
is not reasonable). Otherwise it's necessary to move-construct a copy
of the deleter and use that copy to destroy the owned pointer. We do
not want to be required to do that.

I'd like to withdraw my NAD suggestion. The value of use_count() is already observable during the destructor via
shared_ptr and weak_ptr objects that share ownership, so specifying when it changes ensures correct
behaviour.

Lvalues of type non_swappable are not swappable as defined by 20.5.3.2 [swappable.requirements]:
overload resolution selects the deleted function. Consistently, is_swappable_v<non_swappable> yields
false. It should be noted that, since non_swappable is move constructible and move assignable, a qualified
call to std::swap would be well-formed, even under P0185. Now consider the following snippet:

Before P0185, this snippet would violate the implicit requirement of specialized swap for tuples that each tuple
element be swappable. After P0185, this specialized swap overload for tuples would be SFINAEd away, resulting
in overload resolution selecting the base swap overload, and performing the exchange via move construction and
move assignment of tuples.

This issue affects all of pair, tuple, unique_ptr, array, queue,
priority_queue, stack, and should eventually also apply to optional and variant.

-1- Remarks: This function shall be defined as deleted
unless is_swappable_v<Ti> is true for all i, where
0 <= i and i < sizeof...(Types). The expression inside noexcept
is equivalent to:

-2- Remarks: This function shall be defined as deleted
unless is_move_constructible_v<Ti> && is_swappable_v<Ti>
is true for all i. The expression inside noexcept is equivalent to noexcept(v.swap(w)).

I think there's a minor defect in the std::function interface. The constructor template is:

template <class F> function(F f);

while the assignment operator template is

template <class F> function& operator=(F&& f);

The latter came about as a result of LWG 1288, but that one was dealing with a specific issue that
wouldn't have affected the constructor. I think the constructor should also take f by forwarding reference,
this saves a move in the lvalue/xvalue cases and is also just generally more consistent. Should just make sure
that it's stored as std::decay_t<F> instead of F.

Is there any reason to favor a by-value constructor over a forwarding-reference constructor?

The removal of the "debug only" restriction for use_count() and unique() in shared_ptr
by LWG 2434 introduced a bug. In order for unique() to produce a useful and reliable value,
it needs a synchronize clause to ensure that prior accesses through another reference are visible to the successful
caller of unique(). Many current implementations use a relaxed load, and do not provide this guarantee,
since it's not stated in the standard. For debug/hint usage that was OK. Without it the specification is unclear
and probably misleading.

I would vote for making unique() use memory_order_acquire, and specifying that reference count
decrement operations synchronize with unique(). That still doesn't give us sequential consistency by default,
like we're supposed to have. But the violations seem sufficiently obscure that I think it's OK. All uses that
anybody should care about will work correctly, and the bad uses are clearly bad. I agree with Peter that this
version of unique() may be quite useful.

I would prefer to specify use_count() as only providing an unreliable hint of the actual count (another way
of saying debug only). Or deprecate it, as JF suggested. We can't make use_count() reliable without adding
substantially more fencing. We really don't want someone waiting for use_count() == 2 to determine that
another thread got that far. And unfortunately, I don't think we currently say anything to make it clear that's a
mistake.

This would imply that use_count() normally uses memory_order_relaxed, and unique is
neither specified nor implemented in terms of use_count().

LWG 2939 has been created to signal that some of our current type trait constraints are
not quite correct and I recommend not to enforce the required diagnostics for traits that
are sensitive to mismatches of the current approximate rules.

[2017-03-03, Kona Friday morning]

Unanimous consent to adopt this for C++17, but due to a misunderstanding, it wasn't on the ballot

Setting status to 'Ready' so we'll get it in immediately post-C++17

[2017-06-15 request from Daniel]

I don't believe that this should be "Ready"; I added the extra note to LWG 2797 *and* added the new issue 2939 exactly to *prevent* 2797 being accepted for C++17

alignof(T). Requires: alignof(T) shall be a valid expression (5.3.6),
otherwise the program is ill-formed.

Change the specification for is_base_of, is_convertible, is_callable, and
is_nothrow_callable in Table 40 in 23.15.6 [meta.rel]:

Table 40 — Type relationship predicates

[…]

template <class Base, class
Derived>
struct is_base_of;

[…]

If Base and Derived are
non-union class types and are
different types (ignoring possible cv-qualifiers) then Derived shall
be a complete type, otherwise the program is ill-formed.
[Note: Base classes that
are private, protected, or ambiguous are,
nonetheless, base classes. — end note]

template <class From, class To>
struct is_convertible;

see below

From and To shall be complete
types, arrays of unknown bound,
or (possibly cv-qualified) void
types, otherwise the program is
ill-formed.

swap is a critical function in the standard library, and
should be declared constexpr to support more
widespread use of constexpr in libraries. This
was proposed in P0202R1 which was reviewed
favourably at Oulu, but the widespread changes to
the <algorithm> header were too risky and unproven
for C++17. We should not lose constexpr support for
the much simpler (and more important) <utility>
functions because they were attached to a larger
paper. Similarly, the fundamental value wrappers, pair and tuple,
should have constexpr swap functions,
and the same should be considered for optional and
variant. It is not possible to mark swap for std::array
as constexpr without adopting the rest of the
P0202R1 though, or rewriting the specification
for array swap to not use swap_ranges.

Suggested resolution:

Adopt the changes to the <utility> header proposed in
P0202R1, i.e., only bullets C, D, and E.
In addition, mark the swap functions of pair and
tuple as constexpr, and consider doing the same for
optional and variant.

[Issues Telecon 16-Dec-2016]

Priority 3

[2017-11 Albuquerque Wednesday issue processing]

Status to Open; we don't want to do this yet; gated on Core issue 1581. See also 2897.

[2017-11 Albuquerque Thursday]

It looks like 1581 is going to be resolved this week, so we should revisit soon.

The requirements on the stateT type used
to instantiate class template fpos are not
clear, and the following Table 108 — "Position
type requirements" is a bit of a mess. This is
old wording, and should be cleaned up with better
terminology from the Clause 17 Requirements. For example,
stateT might be required to be CopyConstructible,
CopyAssignable, and Destructible. Several
entries in the final column of the table appear to be
post-conditions, but without the post markup to
clarify they are not assertions or preconditions. They
frequently refer to identifiers that do not apply to all
entries in their corresponding Expression
column, leaving some expressions without a clearly defined semantic.
If stateT is a trivial type, is fpos also a
trivial type, or is a default constructor not required/supported?

Throughout optional/variant/any's specification references are made to "the selected constructor
of T". For example, 23.6.3.1 [optional.ctor]/16 says of the constructor from const T&:

-16- Remarks: If T's selected constructor is a constexpr constructor, this constructor shall be a
constexpr constructor.

Similarly, the in-place constructor has this wording (23.6.3.1 [optional.ctor]/25-26):

-25- Throws: Any exception thrown by the selected constructor of T.

-26- Remarks: If T's constructor selected for the initialization is a constexpr constructor,
this constructor shall be a constexpr constructor.

If T is a scalar type, it has no constructor at all. Moreover, even for
class types, the in-place constructor wording ignores any implicit conversion done on the argument before the selected
constructor is called, which 1) may not be valid in constant expressions and 2) may throw an exception; such exceptions
aren't thrown "by the selected constructor of T" but outside it.

The wording should probably be recast to refer to the entire initialization.

If a std::function has a reference as a return type, and that reference binds to a prvalue
returned by the callable that it wraps, then the reference is always dangling. Because any use of such
a reference results in undefined behaviour, the std::function should not be allowed to be
initialized with such a callable. Instead, the program should be ill-formed.

A minimal example of well-formed code under the current standard that exhibits this issue:

A fix has been implemented. Conversions that may be conversion operators are allowed, though, because those can
produce legitimate glvalues. Before adopting this, it needs to be considered whether there should be
SFINAE or a hard error.

-6- Returns: [deleted text: For all i, 0 ≤ i < N, a] [inserted text: An]
array<remove_cv_t<T>, N> such that each element is [deleted text: copy-initialized
with the corresponding element of a] [inserted text: initialized with { a[i]... } for
the first form, or { std::move(a[i])... } for the second form].

While SG1 was processing NB comments CA1 and LATE2 regarding P0270R1,
we decided to remove the proposed guarantee that quick_exit be made signal safe.

Our reasoning is that functions registered with at_quick_exit aren't forbidden from calling
quick_exit, but quick_exit implementations likely acquire some form of a lock before
processing all registered functions (because a note forbids the implementation from introducing data races).

The same applies if a function registered in at_quick_exit handles a signal, and that signal calls
quick_exit. SG1 believes that both issues (same thread deadlock, and signal deadlock) can be resolved
in the same manner. Either:

Specify that calling quick_exit while servicing quick_exit is undefined; or

Specify that calling quick_exit while servicing quick_exit is defined to not deadlock,
and instead calls _Exit without calling further registered functions.

Option 2 seems preferable, and can be implemented along the lines of:

Remarks: The function quick_exit() is signal-safe (21.11.3 [csignal.syn]). [Note: It might
still be unsafe to call quick_exit() from a handler, because the functions registered with at_quick_exit()
might not be signal-safe. — end note]

-13- Effects: Functions registered by calls to at_quick_exit are called in the reverse order of their
registration, except that a function shall be called after any previously registered functions that had
already been called at the time it was registered. Objects shall not be destroyed as a result of calling
quick_exit. If control leaves a registered function called by quick_exit because the function does not
provide a handler for a thrown exception, std::terminate() shall be called. [Note: at_quick_exit
may call a registered function from a different thread than the one that registered it, so registered
functions should not rely on the identity of objects with thread storage duration. — end note] After
calling registered functions, quick_exit shall call _Exit(status). [Note: The standard file
buffers are not flushed. See: ISO C 7.22.4.5. — end note]

-?- Remarks: The function quick_exit() is signal-safe (21.11.3 [csignal.syn]). [Note:
It might still be unsafe to call quick_exit() from a handler, because the functions registered with
at_quick_exit() might not be signal-safe. — end note]

resize_file has this postcondition (after resolving late comment 42, see P0489R0):

Postcondition: file_size(p) == new_size.

This is impossible for an implementation to satisfy, due to the possibility of file system races.
This is not actually a postcondition; rather, it is an effect that need no longer hold when the function returns.

[Drafting note: I considered a slightly more verbose form: "Causes the
size in bytes of the file p resolves to, as determined by file_size
(30.11.14.14 [fs.op.file_size]), to be equal to new_size, as if by POSIX
truncate." but I don't think it's an improvement. The intent of the
proposed wording is that if either file_size(p) or truncate(p.c_str())
would fail then an error occurs, but no call to file_size is required,
and file system races might change the size before any such call does occur.]

Whenever a name x defined in the standard library is mentioned, the name x is assumed to be fully
qualified as ::std::x, unless explicitly described otherwise. For example, if the Effects section
for library function F is described as calling library function G, the function ::std::G is meant.

I would like clarification from LWG regarding the various limit macros like INT8_MIN in <cstdint>,
in pursuit of editorial cleanup of this header's synopsis. I have two questions:

At present, macros like INT8_MIN that correspond to the optional type int8_t are required
(unconditionally), whereas the underlying type to which they appertain is only optional. Is this deliberate?
Should the macros also be optional?

Is it deliberate that C++ only specifies sized aliases for the sizes 8, 16, 32 and 64, whereas the corresponding
C header allows type aliases and macros for arbitrary sizes for implementations that choose to provide extended integer
types? Is the C++ wording more restrictive by accident?

[2017-01-27 Telecon]

Priority 3

[2017-03-04, Kona]

C11 ties the macro names to the existence of the types. Marshall to research the second question.

This is as close as I can get to the C wording without resolving part (a) of the issue (whether we deliberately don't
allow sized type aliases for sizes other than 8, 16, 32, 64, a departure from C). Once we resolve part (a), we need
to revisit <cinttypes> and fix up the synopsis (perhaps to get rid of N) and add similar
wording as the one below to make the formatting macros for the fixed-width types optional. For historical interest,
this issue is related to LWG 553 and LWG 841.

-2- The header defines all types and macros the same as the C standard library header <stdint.h>.
In particular, for each of the fixed-width types (int8_t, int16_t, int32_t,
int64_t, uint8_t, uint16_t, uint32_t, uint64_t) the type alias and
the corresponding limit macros are defined if and only if the implementation provides the corresponding type.

-2- The header defines all types and macros the same as the C standard library header <stdint.h>.
See also: ISO C 7.20

-?- In particular, all types that use the placeholder N are optional when N is not 8,
16, 32 or 64. The exact-width types intN_t and uintN_t for N = 8, 16, 32, 64
are also optional; however, if an implementation provides integer types with the corresponding width, no padding bits, and
(for the signed types) that have a two's complement representation, it defines the corresponding typedef names. Only
those macros are defined that correspond to typedef names that the implementation actually provides. [Note: The macros
INTN_C and UINTN_C correspond to the typedef names int_leastN_t and
uint_leastN_t, respectively. — end note]

-?- In particular, macros that use the placeholder N are defined if and only if the implementation
actually provides the corresponding typedef name in 21.4.1 [cstdint.syn], and moreover, the fscanf macros
are provided unless the implementation does not have a suitable fscanf length modifier for the type.

should be fine, but isn't guaranteed, since {0} has no type. We should rather go for implicit conversion:

An array is an aggregate (11.6.1 [dcl.init.aggr]) that can be list-initialized with up to N elements
[deleted text: whose types are convertible to T] [inserted text: that can be implicitly converted to T].

[2016-11-26, Tim Song comments]

This is not possible as written, because due to the brace elision rules for aggregate initialization,
std::array<int, 2> arr{{0}, {1}}; will never work: the {0}
is taken as initializing the inner array, and the {1} causes an error.

but I don't think the paper is up to speed with LWG 2756. There's no reason
to use such a universal reference in the guide and remove_reference in its
target; just guide with T, with the target optional<T>: optional's constructors
do the right thing once the type has been deduced.

In 26.2.7 [unord.req] paragraph 12, it says that the behaviour of operator== is undefined unless the
Hash and Pred function objects respectively have the same behaviour. This makes comparing containers
with randomized hashes with different seeds undefined behaviour, but I think that's a valid use case. It's not much
more difficult to support it when the Hash function objects behave differently. I did a little testing and
both libstdc++ and libc++ appear to support this correctly.

I suggest changing the appropriate sentence in 26.2.7 [unord.req] paragraph 12: "The behavior of a program that
uses operator== or operator!= on unordered containers is undefined unless the [deleted text: Hash and]
Pred function [deleted text: objects respectively have] [inserted text: object has] the same behavior for both
containers and the equality comparison operator for Key is a refinement"

-12- Two unordered containers a and b compare equal if a.size() == b.size() and,
for every equivalent-key group [Ea1, Ea2) obtained from a.equal_range(Ea1), there exists
an equivalent-key group [Eb1, Eb2) obtained from b.equal_range(Ea1), such that
is_permutation(Ea1, Ea2, Eb1, Eb2) returns true. For unordered_set and
unordered_map, the complexity of operator== (i.e., the number of calls to the ==
operator of the value_type, to the predicate returned by key_eq(), and to the hasher returned
by hash_function()) is proportional to N in the average case and to N²
in the worst case, where N is a.size(). For unordered_multiset and
unordered_multimap, the complexity of operator== is proportional to
∑ Ei² in the average case and to N² in the worst case,
where N is a.size(), and Ei is the size of the ith
equivalent-key group in a. However, if the respective elements of each corresponding pair of equivalent-key
groups Eai and Ebi are arranged in the same order (as is commonly
the case, e.g., if a and b are unmodified copies of the same container), then the average-case
complexity for unordered_multiset and unordered_multimap becomes proportional to N
(but worst-case complexity remains 𝒪(N²), e.g., for a pathologically bad hash function).
The behavior of a program that uses operator== or operator!= on unordered containers is undefined
unless the [deleted text: Hash and] Pred function object[deleted text: s respectively have] [inserted text: has]
the same behavior for both containers and the equality comparison operator for Key is a refinement (footnote 258)
of the partition into equivalent-key groups produced by Pred.

The first row in Table 112 "Position type requirements"
talks about the expression P(i) and then has an assertion
p == P(i). However, there are no constraints on p
other than being of type P, so (on the face of it) this
seems to require that operator== on P always returns
true, which is nonsensical.

[2017-01-27 Telecon]

Priority 3

Proposed resolution:

2833(i). Library needs to specify what it means when it declares a function constexpr

The library has lots of functions declared constexpr, but it's not clear what that means. The constexpr
keyword implies that there needs to be some invocation of the function, for some set of template
arguments and function arguments, that is valid in a constant expression (otherwise the program would be ill-formed,
with no diagnostic required), along with a few side conditions. I suspect the library intends to require something a
lot stronger than that from implementations (something along the lines of "all calls that could reasonably be constant
subexpressions are in fact constant subexpressions, unless otherwise stated").

[variant.ctor]/1 contains this, which should also be fixed:

"This function shall be constexpr if and only if the value-initialization of the alternative type T0
would satisfy the requirements for a constexpr function."

This is the wrong constraint: instead of constraining whether the function is constexpr, we should constrain
whether a call to it is a constant subexpression.

Daniel:

This has considerable overlap with LWG 2289 but is phrased in a more general way.

17.6.5.6 constexpr functions and constructors [constexpr.functions]

-1- This International Standard explicitly requires that certain standard library functions are
constexpr (10.1.5 [dcl.constexpr]). [inserted text: If the specification for a templated entity
requires that it shall be a constexpr templated entity, then that
templated entity shall be usable in a constant expression.] An
implementation [deleted text: shall not] [inserted text: may] declare
[deleted text: any] [inserted text: additional] standard library function signatures as
constexpr [deleted text: except for those where it is explicitly required].
Within any header that provides any non-defining declarations of
constexpr functions or constructors an implementation shall provide
corresponding definitions.

Currently some overloads of basic_string::find are noexcept and some are not. Historically this
was because some were specified in terms of constructing a temporary basic_string, which could throw. In
practice creating a temporary (and potentially allocating memory) is a silly implementation, and so they could be
noexcept. In the C++17 draft most of them have been changed to create a temporary basic_string_view
instead, which can't throw anyway (P0254R2 made those changes).

This is confusing for users, as they need to carefully check which overload their code will resolve to, and consider
whether that overload can throw. Refactoring code can change whether it calls a throwing or non-throwing overload.
This is an unnecessary burden on users when realistically none of the functions will ever throw.

The find, rfind, find_first_of, find_last_of, find_first_not_of and
find_last_not_of overloads that are defined in terms of basic_string_view should be noexcept,
or "Throws: Nothing." for the ones with narrow contracts (even though those narrow contracts are not enforceable
or testable).

The remaining overloads that are still specified in terms of a temporary string could also be noexcept. They
construct basic_string of length one (which won't throw for an SSO implementation anyway), but can easily be
defined in terms of basic_string_view instead.

There's one basic_string::compare overload that is still defined in terms of a temporary basic_string,
which should be basic_string_view and so can also be noexcept (the other compare overloads can
throw out_of_range).

[2016-12-15, Tim Song comments]

The following overloads invoking basic_string_view<charT>(s, n) are implicitly narrow-contract
(the basic_string_view constructor requires [s, s+n) to be a valid range) and should be
"Throws: Nothing" rather than noexcept:

LWG 2468's resolution added to MoveAssignable the requirement to tolerate self-move-assignment,
but that does nothing for library types that aren't explicitly specified to meet MoveAssignable other than make
those types not meet MoveAssignable any longer.

To realize the intent here, we need to carve out an exception to 20.5.4.9 [res.on.arguments]'s restriction for
move assignment operators and specify that self-move-assignment results in valid but unspecified state unless otherwise
specified. The proposed wording below adds that to 20.5.5.15 [lib.types.movedfrom] since it seems to fit well with the
theme of the current paragraph in that section.

In addition, to address the issue with 26.2.1 [container.requirements.general] noted in LWG
2468's discussion, the requirement tables in that subclause will need to be edited in a way similar to
LWG 2468.

-1- Objects of types defined in the C++ standard library may be moved from (12.8). Move operations may be
explicitly specified or implicitly generated. Unless otherwise specified, such moved-from objects shall be
placed in a valid but unspecified state.

-?- An object of a type defined in the C++ standard library may be move-assigned (15.8.2 [class.copy.assign])
to itself. Such an assignment places the object in a valid but unspecified state unless otherwise specified.

-1- Each of the following applies to all arguments to functions defined in the C++ standard library, unless
explicitly stated otherwise.

(1.1) — […]

(1.2) — […]

(1.3) — If a function argument binds to an rvalue reference parameter, the implementation may
assume that this parameter is a unique reference to this argument. [Note: If the parameter is a generic parameter
of the form T&& and an lvalue of type A is bound, the argument binds to an lvalue reference
(14.8.2.1) and thus is not covered by the previous sentence. — end note] [Note: If a program
casts an lvalue to an xvalue while passing that lvalue to a library function (e.g. by calling the function with the argument
std::move(x)), the program is effectively asking that function to treat that lvalue as a temporary.
The implementation is free to optimize away aliasing checks which might be needed if the argument
was an lvalue. — end note] [Note: This does not apply to the argument passed to a
move assignment operator (20.5.5.15 [lib.types.movedfrom]). — end note]

Requires: If allocator_traits<allocator_type>::propagate_on_container_move_assignment::value
is false, T is MoveInsertable into X and MoveAssignable.
All existing elements of a are either
move assigned to or destroyed.
post: [inserted text: If a and rv do not refer
to the same object,] a shall be equal
to the value that rv had before this assignment.

directory_iterator::increment and recursive_directory_iterator::increment are specified by
reference to the input iterator requirements, which is narrow-contract: it has a precondition that the iterator
is dereferenceable. Yet they are marked as noexcept.

Either the noexcept (one of which was added by LWG 2637) should be removed, or the
behavior of increment when given a nondereferenceable iterator should be specified.

Currently, libstdc++ and MSVC report an error via the error_code argument for a nondereferenceable
directory_iterator, while libc++ and boost::filesystem assert.

[2017-01-27 Telecon]

Priority 2; there are some problems with the wording in alternative B.

[2018-01-24 Tim Song comments]

LWG 3013 will remove this noexcept (for a different reason). The behavior of
calling increment on a nondereferenceable iterator should still be clarified, as I was informed
that LWG wanted it to be well-defined.