
Back in May we discussed using Python, R, and Octave as data analysis tools, and compared the relative strengths of each. One point of contention was whether Python could be considered a legitimate tool for such work. Now, Bei Lu writes that while Python on its own may be lacking, Python with packages is very much up to the task: "My passion with Python started with its natural language processing capability when paired with the Natural Language Toolkit (NLTK). Considering the growing need for text mining to extract content themes and reader sentiments (just to name a few functions), I believe Python+packages will serve as more mainstream analytical tools beyond the academic arena." She also discusses an emerging set of solutions for R which let it better handle big data.
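To make the text mining point concrete, here is a tiny sketch of the kind of lexicon-based sentiment scoring that NLTK makes easy. This version uses only the standard library, and the word lists are made up for the example; NLTK ships real tokenizers and sentiment lexicons.

```python
from collections import Counter
import re

# Toy sentiment lexicon -- illustrative only; NLTK provides real ones.
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "awful", "hate", "poor"}

def tokenize(text):
    """Crude lowercase word tokenizer (NLTK's tokenizers are far more robust)."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment_score(text):
    """Return (#positive hits - #negative hits) over the token stream."""
    counts = Counter(tokenize(text))
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return pos - neg

print(sentiment_score("I love this product, the support is great"))    # > 0
print(sentiment_score("Awful experience, bad docs and poor support"))  # < 0
```

Swap the toy lexicon and regex tokenizer for NLTK's and the same few lines scale up to real corpora.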

Sadly, I see way too many corporate 'documents' that subscribe to the putative logic 'if it has numbers, it must be a spreadsheet; if it has pictures, it goes into PowerPoint'. Where's the 'Ironic' moderation option?

I've been seeing this way too often. People don't bother considering what the intent of the person asking was. They fixate on one word with a relatively vague meaning and choose one particular interpretation of it, and then go haring off into oblivion. The discussion turns into a verbal fight over the precise definition of a word that's vague in the first place.

It's really rather annoying.

So: can you replace R with Python (let's say, for a new project), assuming that you know both languages and all the rele

> The question is not whether or not it is possible but whether or not it is realistic and practical.

Using Python for data analysis is realistic, assuming you know Python (or have enough background in computer science to pick it up quickly -- it's not a particularly difficult language, as languages go: I've seen accounting software packages that would be much harder to learn).

Python is perhaps not quite as practical as some other choices. In particular, object-oriented programming is not an especially

The question is not whether or not it is possible but whether or not it is realistic and practical.

Not only is it realistic and practical, it is already in use for data analysis! Everyone on the ATLAS experiment at CERN uses Python to some degree in their analysis, and my grad students and I use an analysis framework written almost entirely in Python, with ROOT [root.cern.ch] for I/O.

Python can also use other libraries easily and quickly. PyCUDA gives genuinely huge number-crunching power to the language, and allows metaprogramming, which suits scripting languages and machine learning very well. http://mathema.tician.de/dl/pub/nvidia-gtc-2009.mp4 [tician.de]

The readability and flexibility and speed of development are what it brings, the raw power comes from the libraries it can talk to.
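A quick illustration of "the raw power comes from the libraries": the same reduction as a pure-Python loop versus a vectorized NumPy call. Exact timings depend on your machine, but the gap is typically one to two orders of magnitude.

```python
import time
import numpy as np

n = 100_000
xs = list(range(n))
arr = np.arange(n, dtype=np.float64)

# Pure Python: interpreted loop over a list.
t0 = time.perf_counter()
py_sum = sum(x * x for x in xs)
t_py = time.perf_counter() - t0

# NumPy: the same sum of squares, executed in compiled C.
t0 = time.perf_counter()
np_sum = float(np.dot(arr, arr))
t_np = time.perf_counter() - t0

print(f"python: {t_py:.4f}s  numpy: {t_np:.4f}s")
```

The Python code stays readable; the heavy lifting happens in the library.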

What does "legitimate data analysis tool" mean? MATLAB was included in the comparison, and MATLAB is more of an engineering tool. The built-in (excuse me, optional paid-for) stats library is pretty limited.

R is great for doing statistical analysis, but it's not great for doing things like image analysis. And without additional libraries, R isn't nearly as good as it is with them, either.

- Obviously it means to ask whether Python is legitimate or a bastard; what do you think it means? It is not asking whether Python is a 'data analysis tool', it is asking whether Python is a legitimate something-or-other.

So to answer the question you have to look at Python's ancestry. You'll quickly discover that Python was actually conceived in a huge orgy of different programming paradigms, styles, and languages; it's even named after a circus!

I believe the answer is that Python is a bastard of data analysis tools, but so what, bastards are people too.

I wrote a general linear model in Python because I was unhappy with the existing ones and I wanted an intimate knowledge of how it worked. I wrote most of a general linear mixed model, but then decided it wasn't worth the time and just used the one in R via RPy2. Then it turned out the one built into R was too slow, so I upgraded to the one in the lme R package. That exists because a lot of smart people use R.
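For anyone curious, the core of a general linear model really is small enough to write yourself on top of NumPy. This is a minimal ordinary-least-squares sketch; the intimate knowledge (and the real work) comes from adding standard errors, contrasts, and diagnostics on top of it.

```python
import numpy as np

def fit_glm(X, y):
    """Ordinary least squares: find beta minimizing ||X @ beta - y||^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Sanity check: recover known coefficients from noiseless data, y = 2 + 3x.
x = np.linspace(0.0, 10.0, 50)
X = np.column_stack([np.ones_like(x), x])  # intercept column + slope column
y = 2.0 + 3.0 * x

beta = fit_glm(X, y)
print(beta)  # approximately [2., 3.]
```

The mixed-model case is where rolling your own stops being worth it, which is exactly the RPy2-to-R handoff described above.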

But sure, if your "data analysis" involves multiplication and maybe a t-test or two, it doesn't really matter what you use.
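And if that does describe your workload, the t-test really is a one-liner in Python. A sketch using SciPy's `ttest_ind` (assumes SciPy is installed; the data here is simulated for the example):

```python
import numpy as np
from scipy import stats

# Two simulated samples with a real difference in means.
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=200)
b = rng.normal(loc=0.5, scale=1.0, size=200)

# Two-sample t-test; Welch's variant (equal_var=False) is the safer default
# when you can't assume equal variances.
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```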

Sage is basically a batteries-included Python distribution. Lots of people like a big package to use like that; I prefer putting the pieces together myself. My other complaint about Sage, the last time I looked at it, is that it's more difficult to install your own packages in the Sage environment than it is to do so with stock Python. One of the great things about Python is needing a particular algorithm, typing it into Google, and downloading and installing the handy package that someone else has already

Alright, you're old-fashioned. And you're mixing up apples and oranges.

I think what most people these days are talking about is not just having some kind of online analytics data resource, but having a system where having that resource is taken as a given and the task is to use mathematics and AI to classify records, discover patterns and relationships, locate unusual data (without necessarily specifying the nature of the anomaly in advance), and whatnot.

A spreadsheet is fine for doing simple summaries of small, heterogeneous, tabular datasets (calculating averages and whatnot). But it's not going to help you find one record out of millions where your search criteria are too complex to be expressed in a SQL WHERE clause.
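For instance, a predicate that would be painful to express as a WHERE clause is just an ordinary function in Python. The records here are hypothetical, purely for illustration:

```python
# Hypothetical records: flag customers whose most recent purchase breaks
# their own historical pattern -- a per-row computed criterion that SQL
# handles awkwardly at best.
records = [
    {"id": 1, "purchases": [10, 12, 15, 90]},
    {"id": 2, "purchases": [50, 48, 52, 51]},
    {"id": 3, "purchases": [5, 8, 11, 14]},
]

def anomalous(rec):
    """Last purchase more than 3x the mean of the earlier ones."""
    head, last = rec["purchases"][:-1], rec["purchases"][-1]
    return last > 3 * (sum(head) / len(head))

hits = [rec["id"] for rec in records if anomalous(rec)]
print(hits)  # [1]
```

Any criterion you can compute, you can filter on; that is the gap between a query language and a general-purpose one.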

Since people do use Python for data analysis (hence the data-analysis packages that are available), of course it's legitimate.

Just like how when you're standing on the roof and you need to pound in a couple of nails, that heavy pair of pliers in your pocket is a legitimate tool. It may not be the best tool for the job -- the best tool might be a pneumatic nail gun -- but if the pliers are all you have with you and all you know how to use, then they're the right tool. Why spend time and money learning some other "more appropriate" language (or buying an air compressor and nail gun) when you already have a tool at your fingertips that will do what you need?

As your needs grow you might need to find another more appropriate tool, but if you can get the job done with Python, why bother searching for the "perfect" tool?

Depending on your needs, sh, awk, sed, sort, and uniq may be all the tools you need -- many log parsing, analysis, and reporting programs have been written with those tools, often ingesting more rows of data per day than many small-business BI systems.
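For comparison, the classic `sort | uniq -c` counting idiom from those pipelines maps to a few lines of Python. The log lines here are made up for the example:

```python
from collections import Counter

# Roughly: awk '{print $1}' access.log | sort | uniq -c | sort -rn
log_lines = [
    "10.0.0.1 GET /index.html 200",
    "10.0.0.2 GET /about.html 200",
    "10.0.0.1 GET /index.html 304",
    "10.0.0.1 POST /login 401",
]

# Count requests per client IP (the first whitespace-separated field).
counts = Counter(line.split()[0] for line in log_lines)
for ip, n in counts.most_common():
    print(f"{n:>7} {ip}")
```

In a real script you'd stream lines from a file instead of a list, which keeps memory flat no matter how large the log is.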

Why spend time and money learning some other "more appropriate" language (or buying an air compressor and nail gun) when you already have a tool at your fingertips that will do what you need.

Indeed, although sometimes you save yourself a lot of headaches by getting a tool that was built for your task. I have, in a pinch, used a screwdriver to hammer nails, but a screwdriver is no replacement for a hammer.

That being said, Python+SciPy+NumPy is fine for data analysis; people use it all the time, and it works as well as R or MatLab. It is not as though we are talking about QuickBasic for data analysis.

I work in the biosciences and we occasionally have a similar discussion.

In our context, it isn't about how one analyzes the data; it is a question of how anyone else can recreate your experiment: that is, set up the experimental system, acquire the data, and analyze it in a way that yields approximately the same results. It is in our best interest [and mandated by our funding agency and the journals] to publish papers that clearly define how we made our observations and how we analyzed the data.

"legitimate" is such a disrespectful value judgment. Are you saying that people who do data analysis with Python are illegitimate? Are you calling them bastards?

No, seriously, you can have a profitable conversation all about the reasons why you think there are serious drawbacks to using Python as your data analysis tool. Lots of people might benefit from that. But when you start saying things like "That's not a legitimate data analysis tool" or "That's not a real programming language" or whatever, then

"legitimate" is such a disrespectful value judgment. Are you saying that people who do data analysis with Python are illegitimate? Are you calling them bastards?

I'm not sure that's how it's meant, but I agree, it's an odd choice of phrase. If I were to look at it another way, what would make a language 'illegitimate' for data analysis? In that case you look at things like Excel and Access for financial transactions, or some of the early versions of CUDA that didn't support proper IEEE floating-point maths (or at least, not fast IEEE floating-point maths). In those cases you can use the language, and it will spit out results, but they might not be right, and ther

Just ask the astronomy community. They've been moving away from IDL as an analysis environment and towards Python with SciPy (with NumPy and PyFITS offering similar performance). You're asking this question several years after Python was already effectively declared legitimate.

This story seems like an echo of the one a day or so ago about Linux being critical to the success of the LHC. Something with generic programmability supports something specific, then gets discussed as a tool for that specific task. Probably a lot of the comments there apply here.

I love R and Python. However, both of them choke on big data sets. What they need is a built-in mechanism to store data on disk rather than in memory. There are some really convoluted ways of doing this, but they don't always work with modeling packages that weren't written with your particular convoluted approach in mind. So if the base language had the ability to store objects on disk, say with a simple flag, and it were transparent to the rest of the system, most downstream libraries/packages would still
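NumPy does ship something close to that "simple flag" for arrays: `numpy.memmap` keeps the data in a file on disk and pages it in on demand, while downstream code sees an ordinary array. A sketch (though, as noted above, plenty of modeling packages still assume fully in-memory objects):

```python
import numpy as np
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "big.dat")

# Create a disk-backed array; only the pages actually touched load into RAM.
arr = np.memmap(path, dtype=np.float64, mode="w+", shape=(1_000_000,))
arr[:] = np.arange(1_000_000)
arr.flush()  # push written pages out to the file

# Reopen read-only elsewhere; NumPy operations work on it transparently.
ro = np.memmap(path, dtype=np.float64, mode="r", shape=(1_000_000,))
print(float(ro[:10].sum()))  # 45.0
```

The same idea at larger scale is what out-of-core tools build on; memmap covers the flat-array case with no change to downstream NumPy code.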