I am routinely briefed well in advance of product introductions. For that reason and others, it can be hard for me to keep straight what’s been officially announced, introduced for test, introduced for general availability, vaguely planned for the indefinite future, and so on. Perhaps nothing has confused me more in that regard than the SAS Institute’s multi-year effort to get SAS integrated into various MPP DBMS, specifically Teradata, Netezza Twinfin(i), and Aster Data nCluster.

However, I chatted briefly Thursday with Michelle Wilkie, who is the SAS product manager overseeing all this (and also some other stuff, like SAS running on grids without being integrated into a DBMS). As best I understood, the story is:

On Teradata, SAS is shipping in-database scoring today. SAS also is shipping a limited amount of in-database modeling on Teradata, the count recently having gone up from 4 “procs” to 10.

On Netezza Twinfin(i), SAS is shipping in-database scoring, and this was recently announced. I can’t actually find much evidence of this announcement by searching the Web or the SAS website, but Michelle was pretty clear on the point even so. Further confusing matters, SAS’ website seems to say in-database scoring is supported on Netezza’s old generation of products but not its latest one, even though SAS CTO Keith Collins told me exactly the opposite would be true.

On Aster Data nCluster, SAS will ship in-database scoring by the end of 2010. If I understood correctly, this will be for “limited” rather than “general” availability, but Michelle framed that as a distinction without a difference. I.e., if you want to buy in-database SAS scoring on Aster nCluster, you’ll be able to.

(More) in-database SAS modeling is expected on all of Teradata, Netezza Twinfin(i), and Aster Data nCluster in the vague future. (The concept of 2011/2012 came into play.)

SAS/Teradata integration, developed first, involved more hand-coding. SAS has subsequently developed some kind of more general parallelism/in-database capability, akin to what it has in the DBMS-less grid, that either is or isn’t a good match for DBMS vendors’ native ways of supporting parallel processing. (Obviously, I’m still pretty unclear on this part.)

SAS technology is a good fit for Aster Data’s MapReduce-centric way of doing parallelism.

I also took the opportunity to ask Michelle a question I’ve had a heck of a time getting answered: What’s the big deal about in-database data mining scoring anyway? After all, the most common form of in-database data mining scoring is just to take a weighted sum of specific fields in a row, where the weights are the regression coefficients. You can do that in generic SQL, with performance that superficially should be at least as good as that for any alternative strategy. Michelle’s answers seemed to be twofold:

There are other kinds of scoring too — neural networks, etc.

Coding the scoring in SQL isn’t that easy. Michelle gave the example of a specific user (default Netezza reference account, with initials resembling mine) that spent 400 hours writing and testing something you now get for free with SAS/Netezza integration.
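To make the “weighted sum in generic SQL” point above concrete, here is a minimal sketch of linear-model scoring pushed into the database. The table, columns, and coefficients are all hypothetical (nothing here comes from SAS or Netezza); SQLite via Python stands in for an MPP DBMS purely for convenience:

```python
import sqlite3

# In-memory database standing in for an MPP DBMS; the customers table
# and the model coefficients below are purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, age REAL, income REAL, tenure REAL)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?, ?)",
    [(1, 35.0, 60000.0, 4.0), (2, 52.0, 82000.0, 9.0)],
)

# Scoring a fitted linear regression is just a weighted sum of columns,
# the weights being the regression coefficients plus an intercept --
# expressible as one SELECT, so the rows never leave the database.
rows = conn.execute("""
    SELECT id,
           0.5 + 0.01 * age + 0.00001 * income + 0.2 * tenure AS score
    FROM customers
""").fetchall()

for row_id, score in rows:
    print(row_id, round(score, 4))  # 1 2.25 / 2 3.64
```

Of course, as the comment above notes, real deployments involve more than one model form and far more terms, which is where hand-writing this SQL stops being trivial.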

Many if not most of Greenplum’s customers already access Greenplum using the SAS/ACCESS interface. Moreover, we are working directly with SAS to enable in-database scoring. It’s on our product roadmap for this year, and we are actively discussing this with them right now.

All this is obviously critical for us – there’s a huge number of SAS users out there who have a rich library of SAS code, and we need to make sure they can work seamlessly within our ‘data cloud’.

It’s important to point out that our philosophy is that analysts need access to a wide variety of different tools and applications for doing deep analytics. This includes SAS, of course, and BI tools, as well as R, MapReduce and so on. More power to the modelers! And that’s not to forget SQL of course. In many cases, complex models or MapReduce jobs can be done with surprisingly simple SQL statements. (I’m happy to share some with you if you like!) In other cases, not. You need different tools for different jobs. And that includes SAS.

In-db scoring is great. What about in-db modeling? To build a model on all the data still requires that I extract it and use an external box or system to model, sometimes with all the data in RAM (if using R, for example). If I have to model on smaller samples because of resource limits, then I still have the ETL and the constrained view I have with current systems.

So, just scoring is a good first step… but I won’t consider the job done until I can build my models with the same parallel scalability.

Amen to that, Michael! That’s going to be a central focus for me and my team at Greenplum. We’ve already made pretty good progress (I can send you some samples if you drop me an email) and we’re working hard with engineering and with partners to develop more.

Curt Monash posted a nice summary of the current and planned offerings that help to make SAS analytics more available “in the database” — allowing you to analyze your data quickly without having to move it around so much.

[…] My recent post about SAS’ MPP/in-database efforts was based on a discussion in a shared ride to the airport, and was correspondingly rough. SAS’ Shannon Heath was kind enough to write in with clarifications, and to allow me to post same. With permission, I’ve also made trivial grammar edits. […]

Michelle Wilkie from SAS said, at the May 6 Aster Big Data Summit in Washington, DC, that Aster runs parallel instances of a SAS Data step on its nodes. I don’t recall her saying the following, but it would make sense: each instance would touch a subset of the overall data that the Data step would be manipulating, with the results then recombined as needed or left in place, in the database. I believe she said the capability is shipping.

The SAS Data step is very roughly similar to one SQL statement or a sequence of SQL statements wrapped in a procedural language. It joins tables and subselects rows and columns based on some criteria and allows various mathematical operations on them.
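The parallelization idea described two paragraphs up can be sketched in a few lines. This is a toy illustration in Python, not SAS code, and the row-filter-plus-derived-column “step” is a made-up stand-in for a Data step: because the transformation is row-local, running the same step on disjoint subsets of the rows and recombining gives the same answer as running it serially.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative input rows; any row-local transformation would do.
rows = [{"qty": q, "price": p} for q, p in [(2, 3.0), (5, 1.5), (1, 9.0), (4, 2.5)]]

def step(subset):
    # Stand-in for one Data-step instance: subselect rows on a criterion
    # and derive a new column -- roughly a WHERE clause plus a computed
    # column in SQL terms.
    return [{**r, "total": r["qty"] * r["price"]} for r in subset if r["qty"] > 1]

# Partition the rows across two pretend "nodes", run the same step on
# each partition in parallel, then recombine the per-node results.
partitions = [rows[:2], rows[2:]]
with ThreadPoolExecutor() as pool:
    parallel = [r for part in pool.map(step, partitions) for r in part]

serial = step(rows)
print(parallel == serial)  # True: row-local steps commute with partitioning
```

Steps that look across rows (merges, lagged variables, running totals) would need the recombination stage to do real work, which is presumably where the harder engineering lives.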