I think you probably don't want to use quantize here: it makes use of
the current context, which means (for example) that
trunc(Decimal(integer_with_30_digits))
will fail with an InvalidOperation exception. I assume what's wanted is
something that operates purely on Decimals and is unaffected by the
context. Is this true?
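To illustrate the context-dependence I mean (a hypothetical minimal example, not from the patch): quantize respects the context's precision, 28 significant digits by default, so quantizing a 31-digit value to exponent 0 raises InvalidOperation.

```python
import decimal
from decimal import Decimal, getcontext

# quantize is context-sensitive: the result may not have more digits
# than the current context's precision (28 by default).
getcontext().prec = 28
try:
    Decimal(10**30).quantize(Decimal(1))   # 31-digit result
except decimal.InvalidOperation as e:
    print("InvalidOperation:", e)
```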

Sorry: that was nonsense. trunc is fine---it's round, floor and ceil that fail on large arguments with this patch:
>>> import decimal, math
>>> math.floor(decimal.Decimal("1e30"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dickinsm/python_source/py3k/Lib/decimal.py", line 1475, in __floor__
    return trunc(self.quantize(Decimal(1), rounding=ROUND_FLOOR))
  File "/Users/dickinsm/python_source/py3k/Lib/decimal.py", line 2265, in quantize
    'quantize result has too many digits for current context')
  File "/Users/dickinsm/python_source/py3k/Lib/decimal.py", line 3546, in _raise_error
    raise error(explanation)
decimal.InvalidOperation: quantize result has too many digits for current context
>>> round(decimal.Decimal("1e30"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dickinsm/python_source/py3k/Lib/decimal.py", line 1487, in __round__
    return trunc(self.quantize(Decimal(1), rounding=ROUND_HALF_EVEN))
  File "/Users/dickinsm/python_source/py3k/Lib/decimal.py", line 2265, in quantize
    'quantize result has too many digits for current context')
  File "/Users/dickinsm/python_source/py3k/Lib/decimal.py", line 3546, in _raise_error
    raise error(explanation)
decimal.InvalidOperation: quantize result has too many digits for current context
>>>
Can I suggest using _rescale instead of quantize? For example,
def __round__(self, ndigits:_numbers.Integral=None):
    """Rounds self to ndigits decimal places, defaulting to 0.

    If ndigits is omitted or None, returns an int, otherwise a
    Decimal. Rounds half toward even.
    """
    if ndigits is None:
        return trunc(self._rescale(0, rounding=ROUND_HALF_EVEN))
    return self._rescale(-ndigits, rounding=ROUND_HALF_EVEN)
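For what it's worth, with a _rescale-based implementation the failing session above succeeds (this matches how released Python 3 behaves): large arguments round to exact ints, independent of the context precision.

```python
import math
from decimal import Decimal

# With _rescale-based __round__/__floor__/__ceil__, large arguments
# return exact ints instead of raising InvalidOperation.
print(round(Decimal("1e30")))       # 1000000000000000000000000000000
print(math.floor(Decimal("1e30")))  # same value
print(math.ceil(Decimal("1e30")))   # same value
```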

Here's a version of the patch that uses _rescale instead of quantize. I
don't know enough about how contexts are used to know whether someone
might want to know that ceil(Decimal("1e30")) gave a result with more
precision than the input. On the other hand, the error .quantize()
throws is clearly the wrong one, so this is at least an improvement.

Cool! Works for me.
I agree that it's not 100% clear that round(large_decimal) should return an integer rather
than raising an exception. But, rightly or wrongly, this is what int(large_decimal) does at
the moment, and it would be surprising to have int and round behave differently in this
respect. The current behaviour also fits with the way that int(large_float) and
round(large_float) behave, with a valid integer result returned even if that integer is
larger than 2**53.
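Concretely (under Python 3 semantics), int() and round() on a large float agree, and both happily return an integer well beyond 2**53:

```python
# int() and round() of a large float both give the float's exact
# integer value, even when it exceeds 2**53.
x = 1e30
print(int(x) == round(x))   # True
print(round(x) > 2**53)     # True
```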
There is of course a problem here that's not present for floats, namely that someone can
write round(Decimal("1e1000000")) and then wonder why his/her computer takes so long to give
an answer. I don't really see any way around this, other than perhaps a note in the docs.
I notice that math.floor(large_float) and math.ceil(large_float) return floats at the
moment. Is this something that would change under PEP 3141? If not, should
floor(large_decimal) and ceil(large_decimal) return Decimal instances instead of integers?
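For reference, this is how PEP 3141 eventually shipped in Python 3: math.floor and math.ceil return ints for floats and Decimals alike, so there is no type mismatch to worry about.

```python
import math
from decimal import Decimal

# Under PEP 3141 as implemented in Python 3, floor/ceil return ints
# for every Real input, float and Decimal included.
print(type(math.floor(1e300)))          # <class 'int'>
print(type(math.ceil(Decimal("2.5"))))  # <class 'int'>
```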
One last thing: would it be worth backporting some of this to Python 2.6, just to avoid
unnecessary divergence of the Decimal code between 2.x and 3.0? I guess the trunc()
function calls would have to be replaced by calls to the __trunc__ method---would this be a
problem?
Mark

I'm +1 to this change.
As this does not negatively affect the behavior on Py2, for each part of
the patch I propose:
- decimal.py: incorporate them to the trunk
- numbers.py: of course, in Py3
- test_decimal.py: incorporate these tests, activating them when on Py3
What do you think?

If there aren't too many differences between the 2.6 and 3.0 version of
decimal.py and your patch, do 2.6 first, then the next time we merge
stuff into 3.0 from the trunk it'll be forward-ported automatically.
Though you'd have to start by backporting numbers.py first.

Some more comments:
(1) I don't think I understand the mechanism by which __lt__ and __le__ are supposed to
be called. For example, by adding a few print statements it seems that the comparison
Decimal(2) > 3
doesn't call __lt__ at any stage. Is this what's expected, or is this unintended?
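A small experiment suggests why: for a > b, Python tries type(a).__gt__ first and only falls back to the reflected type(b).__lt__ if that returns NotImplemented. So as long as Decimal's __gt__ handles the int, __lt__ is never consulted. A hypothetical class recording which method runs:

```python
calls = []

class C:
    def __lt__(self, other):
        calls.append("__lt__")
        return True
    def __gt__(self, other):
        calls.append("__gt__")
        return True

C() > 3
print(calls)  # ['__gt__'] -- __lt__ was never called
```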
(2) Also on the subject of __le__ and __lt__: if Decimal is going to have rich
comparisons (and I guess it is, one way or another :) ) then we might as well get them
right for NaNs: IEEE-754(r) says that any comparison involving NaN should return False,
with the exception of != which should always return True. With 3-way comparison it
wasn't possible to get this right. This would fix Issue 1514428. (Further discussion of
this should probably go there---I don't think it should hold up applying this patch...)
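To spell out the IEEE-754 rule, here it is illustrated with float NaNs, which CPython's float type already gets right: every comparison involving NaN is False, except != which is True.

```python
# IEEE-754 NaN comparison semantics, as float already implements them:
# all ordering and equality comparisons with NaN are False, except !=.
nan = float("nan")
print(nan < 1, nan > 1, nan == nan)  # False False False
print(nan != nan)                    # True
```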
(3) I'm getting a serious performance hit from this patch: a complete run of
test_decimal.py costs me 22.439 seconds (best of 3) with the patch, and 11.604 seconds
without. I'm not sure where this is coming from---perhaps it's unavoidable.

_dec_from_triple: Probably, yes. It doesn't seem to have any practical
effect, but it's good to be consistent.
__lt__, __le__: It doesn't matter whether they're called because they
have the same effect as __cmp__. They're only there to implement
numbers.Real. Perhaps, though, this is a sign that numbers.Real should
require __cmp__ instead in 2.6? Or all of the rich comparisons instead
of just the two minimal ones?
NaNs: I think the guideline is to keep the current behavior for 2.6, and
it's not needed for the 3141 implementation, but if that issue comes to
a conclusion before I get the 3.0 implementation submitted, I'm happy to
follow it.
Performance: cProfile points at abc.__instancecheck__ taking 20% of the
total time (1132436 calls). This could be a problem... I'll check how
that's changed from head (on my machine) and whether I can fix it.
Thanks for your detailed reviews, Mark!

Yes, making Decimal subclass object instead of Real and Inexact makes it
as fast as it used to be. ABCMeta.__instancecheck__ is easy to speed up,
but after fixing it, we're still about 25% behind. So here's a version
that just registers Decimal as a subclass instead of having it inherit.
I removed __le__ and __lt__ too, since, without subclassing
numbers.Real, nothing requires them to exist, and the behavior's the same.
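For the record, registration is the approach that stuck: in released Python 3, Decimal is a virtual subclass of numbers.Number, so isinstance checks succeed even though the ABC never appears in Decimal's MRO and none of the ABCMeta machinery runs on instance creation.

```python
import numbers
from decimal import Decimal

# Decimal is registered as a virtual subclass rather than inheriting,
# so isinstance() works but the ABC is absent from the MRO.
print(isinstance(Decimal(1), numbers.Number))  # True
print(numbers.Number in Decimal.__mro__)       # False
```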