Two issues should be pointed out about this change. The first is
that a "leap second occurs IN this record" flag is required in either case,
and the positive/negative direction is needed. A record that overlaps a
leap second is different from a record where the start time is IN a
leap second. Consider a 1sps record containing 11 samples with a start
time of 23:59:50 on 31 Dec. The start time is not a leap second in
either case, but the last sample could occur either at 00:00:00 on 1
Jan or at 23:59:60 on 31 Dec. Allowing the seconds field to be 60 is
insufficient in this case and so some extra information beyond the
start time is needed even in the case of the BTime structure.
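A minimal sketch makes the ambiguity concrete (the function name and
the labels here are illustrative, not part of any miniSEED library):

```python
# 11 samples at 1 sps starting 23:59:50 give 10 s of elapsed time,
# but two possible labels for the last sample depending on whether a
# positive leap second occurred inside the record.

def last_sample_time(leap_second_in_record: bool) -> str:
    seconds = 50 + 10  # start second plus 10 s of elapsed time
    if leap_second_in_record:
        # A positive leap second makes 23:59:60 a legal label on 31 Dec.
        return f"23:59:{seconds}" if seconds <= 60 else "00:00:00 (1 Jan)"
    # Without a leap second, second 60 rolls over into the next day.
    return f"23:59:{seconds}" if seconds <= 59 else "00:00:00 (1 Jan)"

print(last_sample_time(True))   # last sample labelled 23:59:60 on 31 Dec
print(last_sample_time(False))  # last sample labelled 00:00:00 on 1 Jan
```

The elapsed time is identical in both cases; only the wall-clock label
of the last sample differs, which is why the flag is needed.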

The second comment is that users do not interact directly with
miniseed currently, and will not do so in the future. There will be
software libraries in each of the popular languages that do this for
them, and those libraries will convert times into the language's
default representation. A BTime might be a little less confusing for
the developer, but the user is going to use a value in software that
looks a lot like the IEEE format when they initiate any processing.
The choice of file format doesn't make this problem go away or
even make it less likely. What would help is if libraries were created
with functions such as getTimeOfSample(27) or
timeDifferenceBetweenSamples(sample1, sample2) so that the library
does the math correctly taking into account the leap seconds.
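Such helpers would reduce to leap-aware arithmetic along these lines.
This is a sketch: the leap table is a hypothetical stand-in for the
authoritative IERS list, and the function names follow the suggestion
above rather than any real library.

```python
# Convert broken-down UTC labels to a continuous scale, then subtract.
import calendar
import datetime

LEAP_DAYS = {datetime.date(1994, 12, 31)}  # hypothetical: days ending in a leap

def to_continuous(day: datetime.date, hms: tuple) -> int:
    """Seconds on a continuous scale; hms may carry second == 60."""
    h, m, s = hms
    base = calendar.timegm((day.year, day.month, day.day, h, m, s))
    leaps = sum(1 for d in LEAP_DAYS if d < day)  # leaps before this day
    return base + leaps

def time_difference(day_a, hms_a, day_b, hms_b) -> int:
    """Real elapsed seconds between two UTC instants, leap-aware."""
    return to_continuous(day_b, hms_b) - to_continuous(day_a, hms_a)

# Naive subtraction across the 1994 year boundary gives 1 s; the
# leap-aware difference gives 2 s because 23:59:60 existed.
print(time_difference(datetime.date(1994, 12, 31), (23, 59, 59),
                      datetime.date(1995, 1, 1), (0, 0, 0)))  # 2
```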

I was originally somewhat against the IEEE format, but the more I
think about it, the less I think it matters. Math with time is hard
and the file format is not the right place to protect users against
doing wrong subtraction across leap seconds. The software that uses
the file format is the right place for that.

I strongly agree with Philip's two points. In my understanding, the two existing leap second flags are no more or less optional than they were before, as stated in the Rationale. There is no way to leave these flags out; in other words, they are required. Granted, they are often not set when they should be, but that is not relevant to the start time representation.

As to the argument that making the time representation appear continuous would encourage improper usage, bear in mind that checking the 3 leap second bits in the straw man is a much lower burden than the current SEED 2.x time interpretation, which is more likely to go wrong. Currently, to understand time in a record, the developer must check the 2 existing leap second flags (or check with an external reference), check for optional microseconds, check for a non-zero time correction, and check the flag to see whether the time correction has been applied. Yet seismology is not riddled with improper time interpretations, mostly, I suspect, because users rely on libraries that take care of these details, as Philip writes.
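A sketch of that bookkeeping, with bit positions and units following my
reading of SEED 2.4 (0x02 for "time correction applied", 0x10/0x20 for
positive/negative leap second, correction in units of 0.0001 s); treat
the details as illustrative and consult the manual for the actual layout:

```python
# The checks a SEED 2.x reader must perform to trust a start time.
def effective_start(header_seconds: float, activity_flags: int,
                    time_correction: int, micros: int):
    """Start time after applying the SEED 2.x adjustments."""
    t = header_seconds + micros * 1e-6      # optional Blockette 1001 microseconds
    if not (activity_flags & 0x02):         # correction not yet applied
        t += time_correction * 1e-4         # correction in 0.0001 s units
    positive_leap = bool(activity_flags & 0x10)
    negative_leap = bool(activity_flags & 0x20)
    return t, positive_leap, negative_leap

t, pos, neg = effective_start(100.0, 0x10, 25, 500)
print(round(t, 6), pos, neg)  # 100.003 True False
```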

Also, the epoch time scale between any two consecutive records in a stream is continuous in the vast, vast, vast majority of existing cases. Unless we start inserting leap seconds every few minutes, this will continue to be true for future records. Designing the time stamp for ease of use in some of the most common operations with miniSEED, e.g. reconstructing a time series from multiple records, is worth strong consideration. In the case of an epoch time scale, the "cost" is the addition of a flag that must be checked to know whether the time stamp is a leap second, which is added to the 2 existing leap second flags that should be consulted anyway. With that one additional flag we have a complete time representation that is easy to use for time calculations. By contrast, the proposed MSEED3 Time Structure requires the developer either to build operators to perform time calculations or to convert the representation to something more amenable to such calculations.
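For example, the continuity check between adjacent records becomes a
one-line comparison with a microsecond epoch stamp (the field names are
illustrative of the straw-man idea, not a fixed specification):

```python
# Does the next record follow the previous one within tolerance?
def is_contiguous(prev_end_us: int, next_start_us: int,
                  sample_period_us: int, tol_us: int = 100) -> bool:
    """True if next_start is one sample period after prev_end."""
    return abs(next_start_us - (prev_end_us + sample_period_us)) <= tol_us

# 1 sps data: next record expected 1 s (1_000_000 us) after the last sample.
assert is_contiguous(1_000_000_000, 1_001_000_000, 1_000_000)       # joins
assert not is_contiguous(1_000_000_000, 1_003_000_000, 1_000_000)   # 2 s gap
```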

Of minor importance is that the microsecond epoch time representation is smaller (8 bytes) than the proposed MSEED3 Time Structure (12 bytes). The latter continues to waste a byte on alignment padding, as the SEED 2.4 BTime structure does.
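The size difference can be checked with Python's struct module, assuming
an int64 microsecond epoch and a BTime-like layout of year, day-of-year,
hour, minute, second, pad byte, and microseconds (an illustrative layout,
not the exact proposal):

```python
import struct

epoch_fmt = "<q"        # int64 microseconds since epoch: 8 bytes
btime_fmt = "<HHBBBBI"  # year, doy, hour, min, sec, pad, microseconds
print(struct.calcsize(epoch_fmt))  # 8
print(struct.calcsize(btime_fmt))  # 12
```

The single-byte pad in the second format mirrors the unused alignment
byte mentioned above.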

The assertion that the proposed time representation (I assume this refers to the epoch time representation) has no way to represent 2 consecutive leap seconds is true. But I'm confused as to why that is relevant: the time stamp represents an instant, not a range. Also, the proposed MSEED3 Time Structure does not appear to be able to represent consecutive leap seconds either. Maybe I've misunderstood the point. I predict there would be larger fish to fry upstream of miniSEED should consecutive leap seconds occur!

I agree very strongly with Doug Neuhauser's comments with regard to the use
of 64-bit Unix time.

In addition to the specific issues Doug has raised with regard to handling
of leap seconds, there are general concerns.

1. Unix time is referential; without a qualified conversion, the time value
per se is useless. The POSIX standard is deliberately non-committal in
defining how it should be interpreted, stating:

It is sufficient to require that applications be allowed to treat this time
as if it represented the number of seconds between the referenced time and
the Epoch. It is the responsibility of the vendor of the system, and the
administrator of the system, to ensure that this value represents the number
of seconds between the referenced time and the Epoch as closely as necessary
for the application being run on that system. (emphasis mine)

The proposed straw-man approach would presumably attempt to represent UTC
with a (modified) Unix time that, when converted to a broken-down Gregorian
date/time, returns the correct UTC date/time. This approach is subject
to errors of interpretation, requiring, in effect, that each time tag be
reconstructed before it can be validly interpreted.
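A short sketch of the hazard: under the POSIX arithmetic, a positive
leap second and the following midnight collapse to the same epoch value,
so the tag alone cannot distinguish them without external leap information.

```python
import calendar

# calendar.timegm applies the POSIX day/hour/minute/second arithmetic
# without validating the seconds field, so second 60 simply carries over.
leap = calendar.timegm((1994, 12, 31, 23, 59, 60))   # 23:59:60 UTC
midnight = calendar.timegm((1995, 1, 1, 0, 0, 0))    # 00:00:00 UTC
print(leap == midnight)  # True: two distinct instants, one epoch value
```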

The approach is convenient for data center management, but it forces onto
the user the burden of correctly constructing and interpreting the time tag
and of managing leap seconds.
By contrast, with the broken-down time representation there is no ambiguity
in the archive, and the content is essentially self-contained and
self-documenting as to the presence of a leap second.

2. A monolithic 64-bit tag is opaque to inspection. It is prone to subtle
errors in construction and manipulation, for example, loss of 8 bits or 16
bits, with no apparent symptom.

The use of a 64-bit tag inappropriately confuses processing convenience with
archival integrity.

An archival representation should be as free as possible from ambiguous, or
outright inaccurate (as in the case of leap-second handling using Unix
time), representations.
A broken-down UTC time tag, as in the present format (with additional
microsecond resolution), serves as an identifiable, unambiguous stand-alone
time mark in each archived data record.