I mean, what does Python do during those 40ms? Compared to Perl's 4ms? Or C's ZILCH ms?

The absolute values seem small enough to be disregarded. But the same pattern can be seen in bigger applications.

Take for example Gentoo's tool chain - emerge - which many Gentoo users are familiar with. It's DARN slow. Even Debian's apt-get - which does much more, and which jealous Red Hat folks often label "amateurish" - works orders of magnitude faster (and even on junk hardware).

Or SCons. SCons takes 1.5 seconds just to start up and print its help message. In the same time its Perl sibling, cons/pcons, manages to compile several sources.
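The start-up overhead is easy to measure yourself. A minimal sketch, assuming a GNU userland where `date +%s%N` yields nanoseconds and `sh` and `python3` are on the PATH (add a `perl -e 1` line if you have Perl installed):

```shell
#!/bin/sh
# Average the cost of N bare interpreter start-ups.
N=20

avg_ms() {                        # avg_ms cmd args... -> mean ms per run
  start=$(date +%s%N)
  i=0
  while [ "$i" -lt "$N" ]; do "$@" >/dev/null 2>&1; i=$((i + 1)); done
  end=$(date +%s%N)
  echo $(( (end - start) / N / 1000000 ))
}

sh_ms=$(avg_ms sh -c :)
py_ms=$(avg_ms python3 -c pass)
echo "sh:      ${sh_ms} ms per start-up"
echo "python3: ${py_ms} ms per start-up"
```

The absolute numbers are machine-dependent, of course; it is the ratio between the two lines that makes the point.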

6 comments:

What you are showing is mostly the time to load the interpreter. For a less biased look at relative benchmarking, see the Computer Language Shootout Game, and don't forget to look up this page on how to make your language win!

If you looked here you'd find a mainly Python program that is the fastest in its field.

My reality is that most of the markedly slow applications I run into turn out to be Python applications.

My best guess is that many Python applications rely way too much on bloat - a la Java - with bells'n'whistles like OOP for tasks as simple as HelloWorld.

On the other side, practical scripting languages like shell and Perl do not try to sell me a concept or a better language. They provide me with a tool to do my work.

Uhm... I guess that, on average, concepts and organization matter more to a Python programmer than usability does - compared to a bash/awk/sed/perl programmer.

And after all, if I'm writing a tool for a short repetitive task, I'm really not inclined to wait every single time for those NNN ms the Python interpreter takes to warm up. (And that covers the lion's share of what I do.)

Somehow bash/awk/sed/perl need no warm-up and are always ready to execute whatever I throw at them.
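To put a rough number on that warm-up tax: the sketch below does the same trivial job fifty times, once as fifty cold interpreter starts and once inside a single warm process. (The job and the count are invented for illustration; timings depend entirely on the machine.)

```shell
#!/bin/sh
# Same trivial job 50 times: fresh interpreter each time vs. one warm process.
N=50
t0=$(date +%s%N)
i=0
while [ "$i" -lt "$N" ]; do
  python3 -c 'print(1 + 1)' >/dev/null   # pays start-up cost every time
  i=$((i + 1))
done
t1=$(date +%s%N)
python3 -c 'for _ in range(50): print(1 + 1)' >/dev/null   # pays it once
t2=$(date +%s%N)
cold_ms=$(( (t1 - t0) / 1000000 ))
warm_ms=$(( (t2 - t1) / 1000000 ))
echo "50 cold starts: ${cold_ms} ms; one warm process: ${warm_ms} ms"
```

Almost all of the cold-start total is interpreter start-up, not the work itself.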

P.S. I wonder whether Python people would ever consider writing a PythonOS. JavaOS was an interesting idea - killed quickly by Sun, just in case - but it showed that interpreter start-up times can be cut simply by keeping the interpreter in memory all the time, shared by all running applications. The idea was resurrected by Java servers - but only on the server side (not the client one) and only for servlets.

I too am an awk, sed, perl programmer, and Perl does have its place; it's just used too much outside that slot. Personally I find 'speed' to be comparable between Perl and Python; but Python ... well, here is what I think in more detail.

On an OS written in Python: you would need to be able to compile at least a subset of Python into machine code - there are various projects looking at that, including PyPy. The OLPC project has a lot of Python, but not for the OS kernel. - Paddy.

Unfortunately you seem to fall into the same mindset as others. No doubt you'll progress to writing larger Perl programs, need some of the features for writing, organising, and maintaining such large programs, and either suffer with Perl as it gets further out of its depth or change your language.

Development, organization & further maintenance have little to do with the language itself.

They all depend on personal discipline.

In the case of Python (or Java) you do not need to have discipline - you are firmly confined within the language's framework. Step aside - and the compiler will complain.

In the case of Perl (or C) you are absolutely on your own - it is your discipline as a developer that is put to the test.

Some people like the former. Others like the latter. It's a matter of personal taste, the ability to discipline ourselves, and learning capacity. And it is also clear - from practice - that this division has little influence on the amount and quality of work done in the end.

WBR.

P.S. You would also notice the same pattern in the IDE vs. plain text editor division: some people like to have everything prearranged once and learn it once (IDE), while others do not mind constantly learning and constantly rearranging the internals of their working environment to better fit the actual work at hand.

P.P.S. Regarding the BASIC remark. Read the statement of the task once again. Most of the time in that task is wasted bringing the data into memory and organizing it there - while a simple FSM (fed the data on-line) would suffice. No saving of data is required at all. If you do want to save the data, then use an RDBMS and SQL. A single SQL statement would do the job in a fraction of a second. Though history would remain silent on how long it would take to INSERT all the data into the DB. ;)
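For what the "FSM fed on-line" alternative could look like: a two-state scanner that never holds more than the current line and one flag in memory. The BEGIN/END record format and the counting task are invented here, since the original task statement isn't quoted:

```shell
#!/bin/sh
# Two-state FSM over a stream: no in-memory copy of the data, just a flag.
count=$(printf 'x\nBEGIN\na\nb\nEND\ny\nBEGIN\nc\nEND\n' | awk '
  /^BEGIN$/ { inside = 1; next }  # transition: outside -> inside
  /^END$/   { inside = 0; next }  # transition: inside -> outside
  inside    { n++ }               # act only while in the "inside" state
  END       { print n }
')
echo "$count"                     # 3 records seen inside BEGIN/END blocks
```

The data flows through once and is discarded; only the state survives between lines.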