Python compiles source code to byte code. A .pyc file is portable to any machine running the same version of Python, so Python doesn't write instructions for the native hardware; the interpreter executes the byte code itself.
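You can see the byte code for yourself with the standard-library dis module. A minimal sketch (the function name is made up for illustration):

```python
import dis

def hypot_squared(x, y):
    # A tiny function whose byte code we can inspect.
    return x * x + y * y

# The compiled byte code lives on the function's code object
# as a raw byte string.
print(hypot_squared.__code__.co_code)

# dis renders those bytes as readable interpreter instructions.
dis.dis(hypot_squared)
```

The instructions dis prints are what the interpreter loop actually executes; none of them are native machine code.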

Python interpreters vary in how they behave, so it's possible to write Python code that works on some interpreters but not on others, and that produces seemingly strange behavior. This caused me some trouble: not understanding "is", numeric comparisons looked more readable to me written with it. Sometimes it even worked!
[code]
>>> (100**2) is (100**2)
False
>>> (10**2) is (10**2)
True
[code]
It turns out that some Python interpreters cache objects for commonly used small integers (CPython caches roughly -5 through 256), making the memory address id(100) constant in those implementations, so "is" happens to agree with "==" for those values. Portable code must not rely on this feature; compare values with "==".
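A small sketch of the cache boundary, assuming CPython's small-integer cache. int() is used here to sidestep the compiler's constant folding, which can share equal constants and muddy the demonstration:

```python
# CPython pre-allocates small ints (roughly -5 through 256);
# every expression producing 100 yields the same cached object.
small_a = int("100")
small_b = int("100")
print(small_a is small_b)   # True on CPython: the cached object

# 10000 is outside the cache, so each call builds a fresh object.
big_a = int("10000")
big_b = int("10000")
print(big_a is big_b)       # False on CPython: two distinct objects

# Portable comparison uses ==, which compares values, not identities.
print(big_a == big_b)       # True on any conforming interpreter
```

The exact cache range is an implementation detail, which is precisely why "is" on numbers is a trap.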

If you think about it, Python spends a lot of time hashing names. Every time you call a function, Python doesn't know that "math" still refers to the "math" module: it has to hash and look up math in math.cos, and then hash and look up cos, because you might have rebound either name in the meantime. Additional timing would help verify this for the various ways of expressing the same expression.
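One such timing experiment, as a sketch: binding math.cos to a local name once moves the repeated lookup out of the loop, so each iteration pays only a cheap local-variable access.

```python
import math
import timeit

def slow(n):
    # Each iteration looks up "math" in the globals dict,
    # then "cos" as an attribute on the module.
    total = 0.0
    for i in range(n):
        total += math.cos(i)
    return total

def fast(n):
    # Bind the attribute once; the loop then uses a local lookup.
    cos = math.cos
    total = 0.0
    for i in range(n):
        total += cos(i)
    return total

t_slow = timeit.timeit(lambda: slow(10_000), number=50)
t_fast = timeit.timeit(lambda: fast(10_000), number=50)
print(f"math.cos each call: {t_slow:.3f}s, local alias: {t_fast:.3f}s")
```

Both functions compute the same sum; only the name-lookup cost differs, and the local-alias version is typically measurably faster on CPython.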

Why does slow Python interest scientists?

Developer time is critical, and Python has a low programming cost. Using the numpy module ( www.enthought.com ) for large arrays, the interpreter overhead becomes negligible, and the array operations run with direct memory access in C code. Compared with what you might write in a compiled language, numpy may make more copies of an array and get worse memory-cache performance, but it's pretty quick.
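A minimal sketch of the idea: one whole-array expression costs one pass through the interpreter, and the per-element loop happens in compiled C.

```python
import numpy as np

# One million doubles, stored in a contiguous C array.
x = np.arange(1_000_000, dtype=np.float64)

# A whole-array expression: a couple of interpreter dispatches,
# with the element loops running in C.
y = x * 2.0 + 1.0

# Note the temporaries: x * 2.0 allocates a full intermediate
# array before + 1.0 runs -- the extra copies mentioned above.
print(y[:3])
```

This is the trade the text describes: extra temporaries and copies versus a hand-written compiled loop, in exchange for interpreter overhead that no longer scales with the array size.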