Re: [Cython] Code generated for the expression int(x)+1

Ask F. Jakobsen, 01.05.2012 09:53:
> I am having a simple performance problem that can be resolved by
> splitting up an expression in two lines. I don't know if it is a bug
> or I am missing something.
>
> The piece of code below is translated to slow code
>
> 1)
> cdef int i
> i=int(x)+1

What you are saying here is:
Convert x (known to be a C double) to an arbitrary-size Python integer
value, add 1, convert the result to a C int and assign it to i.
> whereas the code below is translated to fast code
>
> 2)
> cdef int i
> i=int(x)
> i=i+1

This means:
Convert x (known to be a C double) to an arbitrary-size Python integer
value, convert that to a C int and assign it to i, then add 1 and assign
the result to i.

In the first case, Cython cannot safely assume that the result of the
int() conversion will fit into a C int, so it evaluates the whole
expression in Python space. Note that the "+1" just happens to hit a
case where the optimisation would look safe; if you had written
"int(x) // 200", this decision would make a lot more sense, because the
intermediate result of int(x) really could be larger than a C int, even
though the result of the division will have to fit into one (or will be
made to fit, because you say so).

In the second case, you explicitly tell Cython that the result of the int()
conversion will fit into a C int and that *you* accept the responsibility
for any overflows, so Cython can safely optimise the Python coercion away
and reduce the int() call to a bare C cast from double to int.
You can get the same result by writing down the cast yourself.
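In Cython syntax, that explicit cast could look like this (a sketch,
assuming x is a C double as in the examples above):

```cython
cdef double x = 3.7
cdef int i
i = <int>x + 1   # <int> is a bare C cast: truncate in C, then add in C
```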

Stefan