Additional checks can slow down the code. That doesn't matter for some operations, but it does for __len__, __contains__, etc. The EAFP approach is to catch AttributeError and raise the appropriate dbm.dumb.error exception.

Here's the new version which addresses your last comment. Regarding the first issue, I don't believe that the result will be as readable (but I agree with you that it will be better). For instance, `items` will probably look like this:
    try:
        return [(key, self[key]) for key in self._index.keys()]
    except AttributeError:
        raise dbm.dumb.error(...) from None
but it will look totally different for __len__:
    try:
        return len(self._index)
    except TypeError:
        raise dbm.dumb.error(...) from None
We could catch TypeError only for the dunder methods, though, and for the rest of the methods check the value of _index before access.

> Regarding the first issue, I don't believe that the result will be as readable (but I agree with you that it will be better).
Yes, perhaps the performance of the dbm.dumb module is not worth this cost. Do you have any benchmarking results?
The patch LGTM.
> Should this be targeted for 3.5?
I think yes.

> Now this seems odd, maybe catching + reraising an exception has a greater overhead than a func call and checking if an attribute is None.
Yes, of course, catching and reraising an exception is costly. But when no exception is raised, it is cheap. Usually len() is called on an open database.
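That claim is easy to check with a quick standalone timeit comparison (not part of the patch; the dict stands in for the index):

```python
import timeit

setup = "d = dict.fromkeys(range(100))"

# Success path: the try/except costs almost nothing when nothing is raised.
eafp = timeit.timeit(
    "try:\n    len(d)\nexcept TypeError:\n    pass",
    setup=setup, number=100_000)

# LBYL path: an explicit None check before every access.
lbyl = timeit.timeit(
    "if d is not None:\n    len(d)",
    setup=setup, number=100_000)

print(f"EAFP: {eafp:.4f}s  LBYL: {lbyl:.4f}s")
```

On the success path the two are typically within the same order of magnitude; the EAFP version only becomes expensive when the exception actually fires.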
> IMHO it is a bug fix, not a new feature, and could be applied in 3.3 and 3.4.
Hmm. If we consider dbm.dumb as a separate module, this is perhaps a new feature. But if we consider it as an implementation of the dbm protocol, it should conform to the other dbm modules, and then this is a bugfix. What would you say, Nick?

I think it's definitely still OK to fix this in 3.4, but I think it's borderline enough to avoid including in the last ever 3.3 release. Changing exception types always introduces a little backwards compatibility risk, so it's something I lean towards only doing in new feature releases, even in cases like this where the old exception was clearly not the best choice.

That's slightly more acceptable, but it's still hard to make the case that changing exception types is a reasonable thing to do in a maintenance release (it's only the specific nature of the error that makes me consider it reasonable even in a feature release).

_check_closed sounds like you expect it to be closed, and might even assert that it is closed, except that you want the check run even in release mode and/or it might fail. Since being closed is actually the unexpectedly broken state, I would prefer that you rename it to something else, like _verify_open.

I think the requested timing regression was for the non-broken case. When the database has NOT been closed, and keys() still works, will it be way slower than before?
Note that I am not asking you to do that test (though the eventual committer might); the implementation of whichdb leaves me believing that almost anyone who is likely to care will have already followed the documentation's recommendation to install another *dbm ahead of dumb.

Here's a new patch which uses the EAFP approach for the dunder methods (__len__, __contains__, etc.) and the _verify_open method for the other methods (keys, iterkeys, etc.). Now the results are similar to the benchmark results without the patch.

There is no need to speed up methods which do I/O (__getitem__, __setitem__, __delitem__). However, methods which work only with the index (keys, iterkeys, __contains__, __len__) can be optimized. In the __contains__ method a TypeError can be raised not only by the non-existent __contains__ of None, but also from the __hash__ or __eq__ methods of the key, so we should distinguish these cases.