GitLab to BigQuery

This page provides you with instructions on how to extract data from GitLab and load it into Google BigQuery. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is GitLab?

GitLab offers a web-based Git repository manager with version control and issue tracking features.

What is Google BigQuery?

Google BigQuery is a data warehouse that delivers super-fast results from SQL queries, which it accomplishes using a powerful engine dubbed Dremel. With BigQuery, there's no spinning up (and down) clusters of machines as you work with your data, which is why people often say that BigQuery prioritizes querying over administration. It's super fast, and that speed is the main reason most folks use it.

Getting data out of GitLab

GitLab provides a REST API, but it says, "Going forward, we will start on moving to GraphQL and deprecate the use of controller-specific endpoints."

Most of the items stored in GitLab are accessible through the API. Dozens of items are on the list, including merge requests, project milestones, and todos. As an example, to get a list of repository branches for a particular project, you could call GET /projects/[id]/repository/branches.
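For instance, here's a minimal sketch of pulling that branch list with Python and the requests library. The GitLab instance URL, project ID, and personal access token are placeholders you'd swap for your own values, and the loop follows GitLab's page-based pagination until the results run out:

    import requests

    GITLAB_URL = "https://gitlab.com/api/v4"    # or your self-hosted instance
    PROJECT_ID = 12345                          # placeholder: your project's numeric ID
    TOKEN = "your-personal-access-token"        # placeholder: a token with read_api scope

    def get_branches(project_id):
        """Yield every branch in the project, following GitLab's pagination."""
        page = 1
        while True:
            resp = requests.get(
                f"{GITLAB_URL}/projects/{project_id}/repository/branches",
                headers={"Private-Token": TOKEN},
                params={"per_page": 100, "page": page},
            )
            resp.raise_for_status()
            batch = resp.json()
            if not batch:
                break
            yield from batch
            page += 1

    branches = list(get_branches(PROJECT_ID))
    print(f"Fetched {len(branches)} branches")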

Sample GitLab data

GitLab returns information in JSON format. Each JSON object may contain more than a dozen attributes, which you have to parse before loading the data into your data warehouse. Stitch provides documentation on some of the GitLab table schemas. Here's an abridged, illustrative example of what the response to that call for repository branches might look like (the field names follow GitLab's API documentation; the values are placeholders):
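    [
      {
        "name": "main",
        "merged": false,
        "protected": true,
        "default": true,
        "developers_can_push": false,
        "developers_can_merge": false,
        "web_url": "https://gitlab.example.com/my-group/my-project/-/tree/main",
        "commit": {
          "id": "7b5c3cc8be40ee161ae89a06bba6229da1032a0c",
          "short_id": "7b5c3cc",
          "title": "Add coverage reporting",
          "created_at": "2024-06-28T03:44:20-07:00",
          "author_name": "John Smith",
          "author_email": "john@example.com",
          "message": "Add coverage reporting"
        }
      }
    ]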

Preparing GitLab data

If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each value in the response, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive them. GitLab's documentation should tell you what fields are provided by each endpoint, along with their corresponding datatypes.

Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you'll likely have to create additional tables to capture the unpredictable cardinality in each record.
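As a concrete sketch, a GitLab issue carries a labels attribute that is a list of strings, so one way to prepare it is to split each record into a parent row plus a set of child rows. The table and column names here are illustrative, not prescribed by GitLab or BigQuery:

    def flatten_issue(issue):
        """Split one GitLab issue record into a parent row plus child rows.

        Returns (issue_row, label_rows) so each can be written to its own
        table; the column names are examples, not a required schema.
        """
        issue_row = {
            "id": issue["id"],
            "iid": issue["iid"],
            "project_id": issue["project_id"],
            "title": issue["title"],
            "state": issue["state"],
            "created_at": issue["created_at"],
            "updated_at": issue["updated_at"],
        }
        # "labels" is a list, so it lands in its own child table keyed by issue id.
        label_rows = [
            {"issue_id": issue["id"], "label": label}
            for label in issue.get("labels", [])
        ]
        return issue_row, label_rows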

Loading data into Google BigQuery

Google offers an overview document that covers loading data into BigQuery. Use the bq command-line tool, and in particular the bq load command, to upload data to your datasets and define schema and datatype information; the Quickstart guide for bq walks through the basics. Repeat the process for each table you want to load into BigQuery.
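As a sketch, suppose you've written each table out as newline-delimited JSON along with a schema file. Creating a dataset and loading one table might look like this (the dataset, table, and file names are placeholders):

    bq mk gitlab_data

    bq load \
      --source_format=NEWLINE_DELIMITED_JSON \
      gitlab_data.issues \
      issues.json \
      issues_schema.json

If you'd rather not hand-write a schema file, bq load also accepts an --autodetect flag, though an explicit schema gives you more control over datatypes.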

Keeping GitLab data up to date

At this point you've coded up a script or written a program to get the data you want and successfully moved it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.

Instead, identify key fields that your script can use to bookmark its progression through the data, so it can pick up where it left off as it looks for updated records. Timestamp fields such as updated_at or created_at work well for this. Once you've built in this functionality, you can set up your script as a cron job or continuous loop to pull new data as it appears in GitLab.
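Here's one way to sketch that bookmarking logic in Python. GitLab's issues endpoint accepts an updated_after filter, so the script can persist the timestamp of its last successful run and pass it on the next one. The state file path and helper names are made up for illustration, and pagination is omitted for brevity:

    import json
    import os

    import requests

    STATE_FILE = "gitlab_state.json"    # illustrative: wherever the bookmark lives

    def load_bookmark():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f)["last_updated_at"]
        return "1970-01-01T00:00:00Z"   # first run: fetch everything

    def save_bookmark(timestamp):
        with open(STATE_FILE, "w") as f:
            json.dump({"last_updated_at": timestamp}, f)

    def sync_issues(project_id, token):
        bookmark = load_bookmark()
        resp = requests.get(
            f"https://gitlab.com/api/v4/projects/{project_id}/issues",
            headers={"Private-Token": token},
            params={"updated_after": bookmark, "per_page": 100},
        )
        resp.raise_for_status()
        issues = resp.json()
        # ... transform the records and load them into BigQuery here ...
        if issues:
            save_bookmark(max(issue["updated_at"] for issue in issues))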

And remember, as with any code, once you write it, you have to maintain it. If GitLab modifies its API, or the API sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to. If GitLab makes the REST API obsolete and moves ahead solely with GraphQL, you may have to start from scratch.

Other data warehouse options

BigQuery is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, PostgreSQL, or Snowflake, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To Postgres, To Snowflake, and To Panoply.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your GitLab data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Google BigQuery data warehouse.