Google Cloud SQL to Grafana

This page provides you with instructions on how to extract data from Google Cloud SQL and analyze it in Grafana. (If the mechanics of extracting data from Google Cloud SQL seem too complex or difficult to maintain, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is Google Cloud SQL?

Google Cloud SQL is a managed database service that lets DBAs set up, maintain, and administer MySQL and PostgreSQL databases on Google Cloud Platform.

What is Grafana?

Grafana is an open source platform for time series analytics. It can run on-premises on all major operating systems or be hosted by Grafana Labs via Grafana Cloud. Grafana allows users to create, explore, and share dashboards to query, visualize, and alert on data.

Getting data out of Google Cloud SQL

In most cases, the easiest way to retrieve data from relational databases is by writing SQL queries.
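For example, here is a minimal sketch that connects to a Cloud SQL PostgreSQL instance over its public IP using Python and psycopg2; the host, credentials, and table name are placeholders you'd replace with your own:

    # A minimal sketch: run a SQL query against a Cloud SQL PostgreSQL instance.
    # The host, credentials, and table name below are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="203.0.113.10",       # Cloud SQL instance public IP (placeholder)
        dbname="analytics",        # placeholder database name
        user="readonly_user",      # placeholder user
        password="your-password",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, email, created_at FROM customers LIMIT 100;")
        for row in cur.fetchall():
            print(row)
    conn.close()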

Google also provides a REST API (the Cloud SQL Admin API) for administering databases, instances, and other objects in Cloud SQL. So, for example, to retrieve a resource containing information about a database inside a Cloud SQL instance for a particular project, you could call GET /v1beta4/projects/[project]/instances/[instance]/databases/[database].
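A rough sketch of that call in Python, assuming you already hold an OAuth 2.0 access token with the appropriate scope (for example, the output of gcloud auth print-access-token); the project, instance, and database names are placeholders:

    # A rough sketch of the databases.get call against the Cloud SQL Admin API.
    # ACCESS_TOKEN, PROJECT, INSTANCE, and DATABASE are placeholders.
    import requests

    ACCESS_TOKEN = "ya29...."   # e.g. the output of `gcloud auth print-access-token`
    PROJECT, INSTANCE, DATABASE = "my-project", "my-instance", "analytics"

    url = (
        "https://sqladmin.googleapis.com/v1beta4/"
        f"projects/{PROJECT}/instances/{INSTANCE}/databases/{DATABASE}"
    )
    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    print(resp.json())          # the database resource as JSON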

If your underlying database is PostgreSQL, you can use the pg_dump command to export data as a SQL script (or archive) that you can run to restore the database on any Postgres server; for CSV-format flat files, the COPY command is the usual route. If your underlying database is MySQL, you can use the mysqldump command to export entire tables and databases in a format you specify (e.g. tab-delimited text, CSV, or SQL statements that would restore the database).
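If you'd rather script these exports, a minimal sketch using Python's subprocess module might look like the following; the host, credentials, database, and table names are placeholders:

    # A sketch of scripting the exports; connection details and names are placeholders.
    import subprocess

    # PostgreSQL: a plain-format dump is a SQL script that can rebuild the database.
    # pg_dump reads the password from PGPASSWORD or ~/.pgpass.
    subprocess.run(
        ["pg_dump", "--host=203.0.113.10", "--username=readonly_user",
         "--dbname=analytics", "--file=analytics_dump.sql"],
        check=True,
    )

    # MySQL: dump one table as SQL statements and capture the output to a file.
    with open("customers_dump.sql", "w") as out:
        subprocess.run(
            ["mysqldump", "--host=203.0.113.10", "--user=readonly_user",
             "--password=your-password", "analytics", "customers"],
            stdout=out,
            check=True,
        )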

Sample Google Cloud SQL data

The GET call we mentioned would return a database resource, which contains seven properties. Other API calls return different resources.

For data you export via SQL query, pg_dump, or mysqldump, you need a matching table in your data warehouse to receive the data from Cloud SQL. The information_schema database (a schema, in PostgreSQL's case) contains all of the metadata you need to recreate your tables in another environment.
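For example, here is a quick sketch (again assuming PostgreSQL and placeholder connection details) that pulls column metadata for one table from information_schema:

    # A sketch of pulling column metadata from information_schema so the table
    # can be recreated in the warehouse; connection details are placeholders.
    import psycopg2

    conn = psycopg2.connect(host="203.0.113.10", dbname="analytics",
                            user="readonly_user", password="your-password")
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT column_name, data_type, is_nullable
            FROM information_schema.columns
            WHERE table_schema = 'public' AND table_name = 'customers'
            ORDER BY ordinal_position;
        """)
        for column_name, data_type, is_nullable in cur.fetchall():
            print(column_name, data_type, is_nullable)
    conn.close()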

Preparing Google Cloud SQL data

If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each field in the response, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive it. Google's documentation should tell you what fields are provided by each endpoint, along with their corresponding datatypes.
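A minimal sketch of what that might look like for the database resource mentioned above; the column names and datatypes here are illustrative rather than a definitive mapping, and the warehouse connection details are placeholders:

    # A hypothetical destination table for the database resource; the columns and
    # datatypes are illustrative and should mirror what the endpoint actually returns.
    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS cloudsql_databases (
        name       VARCHAR(128),
        instance   VARCHAR(128),
        project    VARCHAR(128),
        charset    VARCHAR(32),
        collation  VARCHAR(64),
        etag       VARCHAR(64),
        self_link  VARCHAR(256)
    );
    """

    conn = psycopg2.connect(host="warehouse.example.com", dbname="warehouse",
                            user="loader", password="your-password")
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
    conn.close()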

Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you'll likely have to create additional tables to capture the unpredictable cardinality in each record.
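One hypothetical way to handle a list-valued field is a child table that holds one row per list element, keyed back to the parent record; the table and column names below are made up for illustration:

    # A hypothetical child table for a list-valued field: one row per list element,
    # keyed back to the parent record. Table and column names are made up.
    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS cloudsql_database_labels (
        database_name VARCHAR(128),   -- points back to cloudsql_databases.name
        label_key     VARCHAR(64),
        label_value   VARCHAR(256)
    );
    """

    conn = psycopg2.connect(host="warehouse.example.com", dbname="warehouse",
                            user="loader", password="your-password")
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
    conn.close()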

Loading data into Grafana

Analyzing data in Grafana requires putting it into a format that Grafana can read. Grafana natively supports nine data sources, and offers plugins that provide access to more than 50 more. Generally, it's a good idea to move all your data into a data warehouse for analysis. MySQL, Microsoft SQL Server, and PostgreSQL are among the supported data sources, and because Amazon Redshift is built on PostgreSQL and Panoply is built on Redshift, those popular data warehouses are also supported. However, Snowflake and Google BigQuery are not currently supported.
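You can add the warehouse as a data source through Grafana's UI, or script it against Grafana's HTTP API. Here is a rough sketch of the latter for a PostgreSQL warehouse; the Grafana URL, API key, and connection details are placeholders, and the exact field layout can differ between Grafana versions:

    # A rough sketch of registering a PostgreSQL warehouse as a Grafana data source
    # via Grafana's HTTP API; URL, API key, and connection details are placeholders,
    # and field names can differ between Grafana versions.
    import requests

    GRAFANA_URL = "http://localhost:3000"
    API_KEY = "your-grafana-api-key"

    payload = {
        "name": "warehouse",
        "type": "postgres",
        "access": "proxy",
        "url": "warehouse.example.com:5432",
        "database": "warehouse",
        "user": "grafana_reader",
        "secureJsonData": {"password": "your-password"},
    }
    resp = requests.post(
        f"{GRAFANA_URL}/api/datasources",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    resp.raise_for_status()
    print(resp.json())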

Analyzing data in Grafana

Grafana provides a getting started guide that walks new users through the process of creating panels and dashboards. Panel data is powered by queries you build in Grafana's Query Editor. You can create graphs with as many metrics and series as you want. You can use variable strings within panel configuration to create template dashboards. Time ranges generally apply to an entire dashboard, but you can override them for individual panels.

Keeping Google Cloud SQL data up to date

At this point you've coded up a script or written a program to get the data you want and successfully moved it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.

Instead, identify key fields that your script can use to bookmark its progression through the data and pick up where it left off as it looks for updated records. Timestamp fields such as updated_at or created_at, or auto-incrementing primary keys, work best for this. Once you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Google Cloud SQL.
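A minimal sketch of that bookmarking pattern, assuming a PostgreSQL source, a customers table with an updated_at column, and placeholder connection details; in practice you'd persist the bookmark somewhere durable between runs:

    # A sketch of incremental extraction: remember the newest updated_at value seen
    # so far and pull only rows past that bookmark on the next run. Connection
    # details, table, and bookmark storage are placeholders.
    import psycopg2

    def load_new_rows(bookmark):
        """Return rows updated since `bookmark` plus the new bookmark value."""
        conn = psycopg2.connect(host="203.0.113.10", dbname="analytics",
                                user="readonly_user", password="your-password")
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, email, updated_at
                FROM customers
                WHERE updated_at > %s
                ORDER BY updated_at;
                """,
                (bookmark,),
            )
            rows = cur.fetchall()
        conn.close()
        new_bookmark = rows[-1][-1] if rows else bookmark
        return rows, new_bookmark

    # Run on a schedule (e.g. from cron) and persist the bookmark between runs.
    rows, bookmark = load_new_rows("2000-01-01 00:00:00")
    print(f"fetched {len(rows)} new or updated rows; next bookmark: {bookmark}")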

And remember, as with any code, once you write it, you have to maintain it. If Google modifies its API, or the API sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.

Easier yet, however, is using a solution that does all that work for you. Products like Stitch were built to move data from Google Cloud SQL to Grafana automatically. With just a few clicks, Stitch starts extracting your Google Cloud SQL data via the API, structuring it in a way that's optimized for analysis, and inserting that data into a data warehouse that can be easily accessed and analyzed by Grafana.