The Goal is Data Products: Now How Do We Get There?

The primary output of data science is data products. A data product can be anything from a list of recommendations to a dashboard to a single chart, or any other product that aids in making a more informed decision. In the end, data science should produce usable results, and those results are the data product. The process used to create those data products needs a bit more formalization. Call it a methodology, process, lifecycle, or workflow; whatever the name, it needs to exist.

Data Science is not Software Engineering

First, data science is often treated as software engineering because code is written. However, they are not the same thing. Agile methods, waterfall, and scrum are not pluggable methodologies that can simply be reused for data science. Data science is more science than engineering; therefore, it should follow a more scientific method.

Existing Data Science Workflows

Luckily, some options already exist for data science. Much like software engineering, there is not a magic workflow that fits every project. The goal is to find a workflow that best fits the needs of the current project.

CRISP-DM

The most popular and oldest method is CRISP-DM. CRISP-DM was designed for data mining projects, which are closer to data science than software engineering, but still not an exact match. The 6 steps of CRISP-DM are:

1. Business Understanding
2. Data Understanding
3. Data Preparation
4. Modeling
5. Evaluation
6. Deployment
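
The six CRISP-DM phases (Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, and Deployment) can be sketched as a simple program. This is an illustrative toy, not an implementation of CRISP-DM itself: the function names, the toy data, and the retry loop around modeling and evaluation are all assumptions made for the example.

```python
# Toy sketch of the six CRISP-DM phases as plain functions.
# Everything here (names, data, loop structure) is illustrative.

def business_understanding():
    """Phase 1: frame the question the data product must answer."""
    return "Which customers are likely to churn next quarter?"

def data_understanding(question):
    """Phase 2: collect and explore data relevant to the question."""
    return {"rows": 1000, "columns": ["tenure", "usage", "churned"]}

def data_preparation(raw):
    """Phase 3: clean and reshape the raw data for modeling."""
    return [row for row in range(raw["rows"]) if row % 2 == 0]  # toy filter

def modeling(prepared):
    """Phase 4: fit a model (a trivial stand-in here)."""
    return {"threshold": len(prepared) / 2}

def evaluation(model):
    """Phase 5: check whether the model answers the business question."""
    return model["threshold"] > 0  # accept any non-trivial model

def deployment(model):
    """Phase 6: ship the data product (here, just report the model)."""
    return f"deployed model with threshold {model['threshold']}"

question = business_understanding()
raw = data_understanding(question)
while True:
    prepared = data_preparation(raw)
    model = modeling(prepared)
    if evaluation(model):
        break  # in practice, a failed evaluation may send you back for more data
result = deployment(model)
print(result)
```

Note that even in this sketch the flow is not strictly linear: evaluation can loop back to preparation, which foreshadows the iteration discussed in the comments below.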

Data Science Workflow

Those are 3 workflow options for data science. They are not the only options. Feel free to modify them to best suit the project. It will be exciting to see the new workflows for data science that emerge in the near future, and fun to see which ones turn out to be the most beneficial.

One thing a data product must do is help answer a question. Thus, a logical starting point for data science is a good question. Just don't let the focus of the workflow devolve into the process itself, which is often the case in software engineering. Let the focus be on data products.

Note:
I have previously written 2 posts on this topic, and I don’t think either post gets the methodology exactly correct.

I found the portion about Data Science Workflows really interesting, since I come from a software engineering background. The steps for the 3 workflows described above seem to follow a traditional waterfall approach.

In a Data Science workflow, do you find that it's necessary to iterate or revisit certain steps?

Raymond,
Great question! Most software engineering workflows are just improvements or modifications of waterfall. I think the modeling and evaluation steps might need to be iterated. Depending on the outcome of those steps, it might be necessary to revisit the preparation step to obtain more data or clean the data further.

I believe so, but then my experience is mostly with software engineering and not very much with data science.

I find that things never move along in a waterfall manner except for the simplest projects. Instead, it’s iterative — build something, get some feedback, add/change what you built based on the feedback, get some more feedback. Rinse and repeat.

In the limited data analysis I've done to help solve engineering problems, I frequently feel it follows a similar iterative approach. Prepare, analyze, reflect, get feedback (usually it's not exactly what they want), and then repeat.