Enforce precision and scale in Python's decimal module

In some environments, exact decimal types (NUMERIC, DECIMAL, ...) are defined with a precision and a scale, where precision is the total number of significant digits and scale is the number of digits to the right of the decimal point. I want to use Python's decimal implementation to raise an error if a string cast to Decimal has more digits than the type definition allows.
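
From reading the decimal docs, my guess is that a Context with trapped signals is the mechanism for this; here is a minimal sketch of what I mean for a NUMERIC(4, 2)-style type (the quantize call and the trap choices are my assumption, not a verified solution):

```python
from decimal import Context, Decimal, Inexact, InvalidOperation

# Context limited to 4 significant digits; trapping Inexact and
# InvalidOperation makes violations raise instead of silently rounding.
ctx = Context(prec=4, traps=[Inexact, InvalidOperation])

# Quantizing to two decimal places signals Inexact if fractional digits
# would be lost, and InvalidOperation if the result would need more than
# prec significant digits.
ctx.quantize(Decimal("12.34"), Decimal("0.01"))   # fits: no error
ctx.quantize(Decimal("1.234"), Decimal("0.01"))   # raises Inexact
ctx.quantize(Decimal("123.45"), Decimal("0.01"))  # raises InvalidOperation
```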

So, for example, I have an environment where precision = 4 and scale = 2, i.e. a NUMERIC(4, 2) type.
How can I make casts like the ones below raise an error, because they exceed the precision or scale of the type?
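
For instance, casts like these (illustrative values, not my original commands):

```python
from decimal import Decimal

# Plain Decimal accepts both, but under NUMERIC(4, 2) they should fail:
Decimal("123.45")  # 5 significant digits, exceeds precision 4
Decimal("1.234")   # 3 fractional digits, exceeds scale 2
```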