
If you’re familiar with the expression, or perhaps have seen the eponymous film, you understand the idea of something with far less importance or weight driving a much bigger process. In the film’s case, the expression was used to characterize a completely fabricated war shifting attention away from an actual scandal. For our purposes here, consider it this way: a business purchasing its end-use BI tool before crafting the strategy behind what it wants and how it wants to use it.

It’s a tempting situation. Vendors do a very good job of promoting their business intelligence tools, and there’s nothing wrong with that. But a company can’t rely on that alone to solve the big questions. You wouldn’t buy a dishwasher and then build a house around it…so why rush to invest in a BI tool before you’ve determined exactly what you want out of it and what questions the business wants to answer?

This over-reliance on proprietary tools has, at least for me, encouraged a focus on open-source BI tools. My tools of choice are MySQL for relational databases, RStudio for ETL and analytics, Shiny for R-based deployable visualizations, Orange for GUI-based analytics, and Git for source control. There are other tools, to be sure, and the beauty of the open-source sphere is the constant evolution. Beyond that, you avoid sinking money into a proprietary solution that may be obsolete in a few years.

But more importantly, and where this fits into my point of wagging the dog, an open-source solution allows your company to pilot potential tools and solutions without the level of risk and investment a proprietary solution may demand. I have seen companies invest plenty of money in proprietary solutions before they thought through the business process, then spend a tremendous amount of time and money trying to make those solutions work for what they needed even after they realized the tool was not right for them. They let the tail wag the dog.

Software is a tool, not a solution. Be sure you know what a tool needs to do for you before you choose it.

Those who have been in any sort of sociological research field should be very familiar with the survey platforms available on the web (e.g., SurveyMonkey, SurveyGizmo, or LimeSurvey). Getting your results usually involves a multi-step generate/export/import cycle. Is there a better way?

I asked the question when using R to digest a survey deployed on SurveyGizmo. With so many R packages out there, I had a hunch there was something to help me get my results from SG into R without having to run through the generate/export/import cycle. Enter RSurveyGizmo, a package that does exactly that.

Beyond aggregates and analytics, the survey results in SurveyGizmo should be stored elsewhere for future use. This raises more questions about ETL from the website itself to your database of choice. In this case, let’s assume we have a MySQL database running on Amazon AWS. I recommend this over an MSSQL instance because of the difficulty of using an ODBC connection on anything other than Windows (though it can be done).

Part I: SurveyGizmo

Log into your SurveyGizmo account and head over to your API access options. Find that under Account > Integrations > Manage API.

If you don’t have an active API key listed, Create an API Key. You will then see the API key listed for your user account. Copy that key to a text editor, as you will need it momentarily.

Go back to your SurveyGizmo home page and view the surveys you have out there. Choose one and click on it.

You’ll be taken to the survey build page and the address will be something like https://app.surveygizmo.com/builder/build/id/xxxxxxx where xxxxxxx is a unique number. Copy that number to a text editor, as you will need it momentarily too.

Part II: R + SurveyGizmo

Install RSurveyGizmo via devtools.

library(devtools)
install_github(repo="DerekYves/rsurveygizmo")

Construct the script to grab your survey. You will need the API key and survey number.

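A minimal sketch of that script might look like the following, using the package’s pullsg() function. The API key and survey id below are placeholders; substitute the values you copied in Part I, and check the package documentation for the exact argument names and options.

```r
library(Rsurveygizmo)

# the API key from Account > Integrations > Manage API (placeholder)
api_key <- "your_api_key_here"

# the survey id from the builder URL (placeholder)
survey_id <- 1234567

# pull the survey responses into a data frame
mydata.original <- pullsg(survey_id, api_key)
```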
You will see loading progress and, depending on the size of your survey, will have a frame full of data in just a few moments. (Sometimes I get a JSON error, but it resolves itself in a few minutes.) SurveyGizmo does have API call limits, so please be judicious with how many times you do this. It’s generally good to run the process once you have enough data to start writing your analytics scripts, then again once the survey is closed.

This is the simplest of the methods in the RSurveyGizmo package. You will want to explore the package documentation to learn all it can do for you.

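The database examples that follow assume a DBI connection object named con. One way to create it, assuming the DBI and RMySQL packages and placeholder credentials for the AWS instance:

```r
library(DBI)
library(RMySQL)

# open a connection to the MySQL instance on AWS
# (host, user, password, and dbname are all placeholders)
con <- dbConnect(RMySQL::MySQL(),
                 host     = "your-instance.rds.amazonaws.com",
                 user     = "youruser",
                 password = "yourpassword",
                 dbname   = "mydb",
                 port     = 3306)
```

Remember to close the connection with dbDisconnect(con) when you are finished.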
Two common functions are dbWriteTable and dbGetQuery. As you might expect, to write an R data frame to a table in your MySQL database, you use dbWriteTable:

dbWriteTable(con, "table_name", dataframe.name, overwrite=TRUE)

Using overwrite=TRUE means your table is essentially dropped and recreated, rather than appended.
To get an existing MySQL table into a new R data frame, you’d use dbGetQuery, which runs the query and fetches the full result set in one call (dbSendQuery, by contrast, returns a result object you would still have to fetch from):

newframe <- dbGetQuery(con, "SELECT * FROM mydb.mytable")

Here’s a wrinkle, though. SurveyGizmo downloads come with concatenated column names that may not be very helpful. I prefer to convert all my column names to a standard format and establish a reference table with all the original questions matched up. The following script grabs all the column names from an existing data frame and creates a table with a standard “qxxx” format matched to the original question name.

# get the question text (the column names) into a character vector
Question_Text <- colnames(mydata.original)
# count the questions
sq <- length(Question_Text)
# generate a "q001", "q002", ... key for each question
QKey <- sprintf("q%03d", seq_len(sq))
# build a reference frame matching each key to the original question text
mydata.questions <- data.frame(QKey, Question_Text, stringsAsFactors = FALSE)
# replace the original column names with the keys
colnames(mydata.original) <- QKey

Now you have two frames: mydata.original with uniform column names, and mydata.questions with those column names matched to the original text.

Assuming you want to get those frames into your MySQL database, use the following:
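Assuming the DBI connection con from earlier, this comes down to two dbWriteTable calls. The table names here are illustrative; use whatever fits your schema.

```r
# write both frames to MySQL; overwrite=TRUE drops and recreates each table
dbWriteTable(con, "survey_responses", mydata.original, overwrite = TRUE)
dbWriteTable(con, "survey_questions", mydata.questions, overwrite = TRUE)
```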