At Wehkamp we use Apache Kafka in our event-driven service architecture. It handles high message loads really well. We use Apache Spark to run analysis and machine learning.

When I work with Kafka, the words of Mark van Gool, one of our data architects, always echo in my head: “Kafka should not be used as a data store!” It is really tempting to do so, but most of our event topics have a short retention period. Our data strategy specifies that we should store data on S3 for further processing. Raw data on S3 is not the most convenient format for Spark to work with, though. In this blog I’ll show how you can use Spark Structured Streaming to write JSON records from a Kafka topic into a Delta table.

Note: This article assumes that you’re dealing with a JSON topic without a schema. It also assumes that the buckets are mounted to the file system, so we can read and write to them directly (without the need for boto3). Also: I’m using Databricks, so some parts are Databricks-specific.

To make things easier to understand, I’ve made a diagram of the setup we’re trying to create. Let’s assume we have 2 topics that we need to turn into Delta tables. Another notebook consumes those Delta tables.

Each topic will get its own Delta table in its own bucket. The topics are read by parametrised jobs that use Spark Structured Streaming to stream updates into the table. The update jobs can run every hour or continuously, depending on your needs. The job saves the Kafka offsets to its checkpoint location, so every message is read only once.

The notebook that needs the topics connects to the Delta tables and consumes the data. This way the notebook becomes decoupled from Kafka.

To write the Delta table, we need three settings: the location of the Delta table, the location of the checkpoints and the location of the schema file. We will use a convention to derive these locations from the name of the topic:
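
A minimal sketch of such a convention is shown below; the mount point /mnt/kafka-delta, the function name and the example topic name are assumptions, not the exact code we run:

```python
# Hypothetical path convention; adjust the mount point to your own buckets.
def topic_locations(topic: str) -> dict:
    """Derive the Delta table, checkpoint and schema locations for a topic."""
    base = f"/mnt/kafka-delta/{topic}"  # assumed S3 mount point
    return {
        "delta_table": f"{base}/delta",        # Delta table files
        "checkpoints": f"{base}/checkpoints",  # Structured Streaming checkpoints
        "schema_file": f"{base}/schema.json",  # cached JSON schema
    }

locations = topic_locations("favorites")  # "favorites" is an example topic name
```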

The Kafka topic contains JSON. To properly read this data into Spark, we must provide a schema. To make things faster, we’ll infer the schema only once and save it to an S3 location. Upon future runs we’ll reuse the schema.
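
A hedged sketch of that inference step could look like this; the function signature, the bootstrap servers parameter and the /dbfs path prefix are assumptions, and it relies on the notebook’s SparkSession (spark) and the mounted buckets:

```python
import json
import os

from pyspark.sql.functions import col
from pyspark.sql.types import StructType

def infer_topic_schema_json(topic, bootstrap_servers, schema_path):
    """Infer the JSON schema of a topic once and cache it on S3."""
    local_path = "/dbfs" + schema_path  # mounted bucket, readable as a normal file
    if os.path.exists(local_path):
        # Reuse the schema saved by a previous run.
        with open(local_path) as f:
            return StructType.fromJson(json.load(f))

    # Read the topic as a batch and let Spark infer the schema from the JSON values.
    values = (spark.read
              .format("kafka")
              .option("kafka.bootstrap.servers", bootstrap_servers)
              .option("subscribe", topic)
              .option("startingOffsets", "earliest")
              .load()
              .select(col("value").cast("string")))

    schema = spark.read.json(values.rdd.map(lambda row: row.value)).schema

    # Save the schema so future runs can skip the (expensive) inference.
    with open(local_path, "w") as f:
        json.dump(schema.jsonValue(), f)
    return schema
```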

Now we can finally start to use Spark Structured Streaming to read the Kafka topic. The function we’ll use looks a lot like the infer_topic_schema_json function. The main difference is the use of readStream, which turns the read into a structured streaming query.
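
A rough sketch of that streaming read, assuming the schema comes from the inference step above (the function name and option values are placeholders):

```python
from pyspark.sql.functions import col, from_json

def read_stream_kafka_topic(topic, schema, bootstrap_servers):
    """Stream a JSON Kafka topic into a dataframe with the topic's columns."""
    return (spark.readStream                      # readStream instead of read
            .format("kafka")
            .option("kafka.bootstrap.servers", bootstrap_servers)
            .option("subscribe", topic)
            .option("startingOffsets", "earliest")
            .load()
            # the Kafka value is a JSON string; parse it with the saved schema
            .select(from_json(col("value").cast("string"), schema).alias("value"))
            .select("value.*"))
```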

First, we need to make sure the Delta table is present. Here we can use the schema of the streaming dataframe to create an empty dataframe, which we use to write an empty Delta table if one does not exist yet.
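
A sketch of that step, assuming df is the streaming dataframe from the previous snippet and delta_location follows the path convention shown earlier:

```python
from delta.tables import DeltaTable

def ensure_delta_table_exists(df, delta_location):
    """Write an empty Delta table with the dataframe's schema if none exists yet."""
    if not DeltaTable.isDeltaTable(spark, delta_location):
        (spark.createDataFrame([], df.schema)   # empty dataframe, same schema
              .write
              .format("delta")
              .save(delta_location))
```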

We want to update or insert all the columns of our dataframe into the Delta table, so we are using whenNotMatchedInsertAll and whenMatchedUpdateAll. More info can be found in the documentation of the DeltaMergeBuilder.
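
Putting it together with foreachBatch, a merge per micro-batch could look like the sketch below. The join key id, the locations and the trigger are assumptions; pick the key column and trigger that fit your topic and schedule:

```python
from delta.tables import DeltaTable

delta_location = locations["delta_table"]        # from the path convention above
checkpoint_location = locations["checkpoints"]

def upsert_to_delta(micro_batch_df, batch_id):
    """Merge one micro-batch into the Delta table: update existing keys, insert new ones."""
    (DeltaTable.forPath(spark, delta_location)
        .alias("t")
        .merge(micro_batch_df.alias("s"), "s.id = t.id")  # "id" is a hypothetical key column
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

(df.writeStream
   .foreachBatch(upsert_to_delta)
   .option("checkpointLocation", checkpoint_location)  # offsets are tracked here
   .outputMode("update")
   .trigger(once=True)  # scheduled (e.g. hourly) run; use .trigger(processingTime=...) for an always-on job
   .start())
```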

I’ve shown one way of using Spark Structured Streaming to update a Delta table on S3. The combination of Databricks, S3 and Kafka makes for a high-performance setup. But the real advantage is not just serializing topics into the Delta Lake; it is combining sources to create new Delta tables that are updated on the fly and provide relevant data to your domain.

We’ve seen a performance uplift in scripts that used to query Kafka themselves (some improved by 25%). The Delta table is faster and makes the code easier to read.

A big shout-out to Jesse Bouwman (Junior Data Scientist) and Martinus Slomp (Senior .NET Developer) for the collaboration on this code.