The Information Lab is a team of passionate Tableau software professionals. We are one of the longest standing Tableau partners in the UK with experience of all aspects of the Tableau product suite.

Our team are skilled at working with data and we are all certified Tableau consultants.

Like most Tableau evangelists, I first discovered the power of Tableau through constant frustration with authoring data reports in classic spreadsheet applications. Since then I've thoroughly enjoyed helping businesses see and understand their data, as well as making full use of Tableau Public to bring public datasets to life.

Specialties: Tableau Data Visualisation, Dashboard Design, SQL, Excel

Experience

Dec 2011 - Present

CTO / The Information Lab

Helping clients implement, create and understand exciting dashboards with Tableau data visualisation software. Whether it's training or authoring, my goal is to help people make sense of data.

Jul 2007 - Nov 2011

Client Integration Manager / YorkMetrics Ltd.

Since joining YorkMetrics in 2007 I have produced a wide range of measurement solutions to fit challenging customer IT infrastructures. As the team's client integration manager I head up the use and installation of YorkMetrics' products, and specialise in both Tableau Desktop and Server.

Feb 2008 - Jul 2011

Webmaster / SSEI

Implement and manage the public website and internal content management system for the Software Systems Engineering Initiative.

1. What is it?

The Tableau Web Data Connector (WDC) is essentially the Extract API embedded directly into Tableau Desktop, with JavaScript used as the wrapper to send data to it. This means that connector authors around the community can easily build Tableau connectors for any data source accessible via JavaScript but not currently listed among the native Tableau connections.

When you think JavaScript you may think that these are going to be connections to websites, web services, APIs, etc., and you'd be right. However, JavaScript is able to work with local files too, meaning WDCs could parse local XML and JSON files, for example.
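If you're curious what that JavaScript looks like, here's a rough sketch of a connector's shape under the 9.1 beta (version 1.0) API. The field names and the single row of data are made up for illustration; a real connector would fetch its rows from a web service or local file:

var myConnector = tableau.makeConnector();

// First Tableau asks the connector for its column headers...
myConnector.getColumnHeaders = function() {
    var fieldNames = ['date', 'value'];
    var fieldTypes = ['date', 'float'];
    tableau.headersCallback(fieldNames, fieldTypes);
};

// ...then it asks for the data itself
myConnector.getTableData = function(lastRecordToken) {
    var data = [{ 'date': '2015-07-01', 'value': 1.23 }]; // normally fetched via an AJAX call
    tableau.dataCallback(data, lastRecordToken, false); // false = no more data to come
};

tableau.registerConnector(myConnector);

The connector's interactive page gathers any user input and then calls tableau.submit() to hand control back to Tableau.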

2. So I need to learn JavaScript?

In order to make use of a WDC you need no programming experience whatsoever. If you're aware of a WDC you want to make use of, all you need to know is the URL to access it at. Simply select 'Web Data Connector' from the connection list, enter or paste the URL of the WDC into the appropriate window and follow the instructions presented by the WDC. Each WDC is unique and its interface depends on how the author has designed it.

If you wish to create your own WDC you will need to know some JavaScript, although that in no way means you need to be an expert before you get started. If you have even just a small amount of programming experience you'll find it relatively straightforward to apply it to JavaScript. Indeed, one of the most popular programming techniques is the 'Google programmer': somebody who solves problems and learns methods by simply searching Google. Finally, if you want to get into WDC programming but are totally lost, and you'll be at the Tableau Conference 2015 in Vegas in October, be sure to go to my session The New Tableau Web Data Connector: APIs, JSON & Javascript for Dummies.

3. So how is the data stored? Is it live?

While the connector may be pointed at what feels like a live database (via a web service API, for instance), the data is always stored locally as a Tableau Data Extract and so is only 'live' at the point of connection. That doesn't mean the data can't be refreshed, either manually or on a schedule on Tableau Server (version 9.1 or greater), as long as the connector author has stored the relevant authentication credentials required to continually access the resource.

4. Availability?

The Tableau Web Data Connector is expected to be released with Tableau 9.1 and, as I type, is available for testing by end users through the beta program (to get access to the beta program as a Tableau customer, contact your account manager or email beta@tableau.com). While presence in the beta program doesn't guarantee release in 9.1, it is a good sign. Really it depends on user feedback and any unexpected instability from working with other people's code.

5. Are there any prebuilt connectors I can play with?

Sure there are! If there’s anything the Tableau community is known for it’s adopting new versions fast (even betas) and making the most of new features.

To make use of one of these connectors fire up your latest beta copy of Tableau 9.1, select Web Data Connector from the connections menu, copy and paste one of the above URLs into the connector screen and follow instructions.

(If you're a WDC author and I've missed you off, drop the link in a comment below and I'll add your handiwork to the list.)

A lot of focus in the use of data in sport is centred on talent identification, team/player performance or human performance/sport science. However, calling on my 9 years’ experience working within elite football, I can certainly state that this is just a small part of the work that goes on behind the scenes at a football club.

This is because researching, planning, implementing and reviewing strategies and processes takes up a huge amount of time.

One aspect of this planning is deciding which players to scout, establishing when they are playing and identifying the nearest scout to the location of the match, before then requesting tickets for that match.

How can Alteryx help me?

Here we are going to focus on one specific area of this process – establishing the nearest scout to a match.

I’m going to do this using the “Find Nearest” tool in Alteryx.

First, let's start with our databases. We have two CSV files: one has the stadium name, stadium capacity, city, country, longitude, latitude and competition (league) of all 92 clubs in the English football leagues for the 2015/16 season (you can download this CSV file here).

The second database contains a list of scouts along with their home city, postcode, latitude and longitude. Note: I used ukpostcodedata.com to find the longitudes and latitudes of each of my scouts based upon their postcodes.

What’s my workflow?

Create a new workflow in Alteryx and introduce your two CSV files to the stage by using two “Input Data” tools. Then drag-and-drop two “Create Points” tools and connect your “Input Data” tools to these.

The "Create Points" tool creates a point-type spatial object from two input fields (i.e. longitude & latitude, or X & Y). In our example we need to select "Longitude" as the X Field and "Latitude" as the Y Field for both data streams (the Clubs and Stadia Locations and Scouts Locations databases); we also need to define these as lat/long floating points. By default, these new spatial points will be given the field name "Centroid".

Now that we have our spatial points for every scout and for every club stadium, we can use the “Find Nearest” tool to find the nearest scout to each stadium.

Connect the Clubs & Stadia data stream to the Target (T) input of the “Find Nearest” tool and the Scouts data stream to the Universe (U) input. We then need to select the centroid for the ‘T’ & ‘U’ inputs in the configuration menu. In this example I have chosen to find the 3 nearest points (which will find my 3 nearest scouts to each stadium). I have also taken this opportunity to rename a couple of the fields (as you will see in the configuration window).

From the "Find Nearest" tool we will obtain the names of the three nearest scouts for each ground, their distance from the ground and the direction (e.g. north west).
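As an aside, if you ever want to sanity-check what "Find Nearest" produces, the underlying logic is just a great-circle (haversine) distance calculation followed by a sort. Here is a rough JavaScript sketch of the same idea; the coordinates and names are made up for illustration:

// Hypothetical sample data; in the workflow this comes from the two CSV inputs
var stadia = [{ name: 'Stadium A', lat: 53.46, lon: -2.29 }];
var scouts = [
  { name: 'Scout 1', lat: 53.48, lon: -2.24 },
  { name: 'Scout 2', lat: 51.56, lon: -0.28 },
  { name: 'Scout 3', lat: 52.45, lon: -1.87 },
  { name: 'Scout 4', lat: 54.98, lon: -1.62 }
];

// Haversine formula: great-circle distance between two lat/long points, in miles
function haversineMiles(lat1, lon1, lat2, lon2) {
  var toRad = function(d) { return d * Math.PI / 180; };
  var R = 3959; // Earth's radius in miles
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// For each stadium (Target), rank every scout (Universe) by distance and keep the closest three
stadia.forEach(function(stadium) {
  var nearest = scouts.map(function(scout) {
    return { scout: scout.name,
             miles: haversineMiles(stadium.lat, stadium.lon, scout.lat, scout.lon) };
  }).sort(function(a, b) { return a.miles - b.miles; }).slice(0, 3);
  console.log(stadium.name, nearest);
});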

All we need to do then is connect the 'M' output of the "Find Nearest" tool to an "Output Data" tool and configure our output settings; for this example I have created a .tde for use in Tableau.

Now for Tableau…

I have connected to my Alteryx workflow output and also to a second data source in Tableau, which is a list of all matches for the 2015/16 season, and blended the data on Club = Home Team. This then links the home team of any match to the stadium location and nearest scout details for that specific match.

It is easy to represent where a project starts or where it ends using Tableau’s Gantt chart. However, when we want to look at the duration of a project it can be quite difficult to create the visualisation.

For example, imagine you are creating a timetable for lessons. There will be multiple different lessons within a day, so there will be multiple start times and end times for each lesson.

You may be familiar with the DATEDIFF function, which calculates the difference, at a given time level, between two points in time.

E.g. DATEDIFF('minute',[Start],[Finish])

If we use this calculation to work out the difference between the start and end of a lesson, we should then be able to use it as the size of the Gantt bar. This should show the length of each lesson as a bar; however, it's often at this point that things don't look as they should:

What we want:

What we see:

Not what we want. What you may notice in this image is that the date range along the bottom spreads from Jul 17 to Nov 14. This is an indication of what may be going wrong here. If you look at the tooltip you'll notice that the DATEDIFF calculation returns quite a large number. The size of the bars is interpreted in days, so these numbers are far too large.

As the DATEDIFF calculation is in minutes, what you'll need to do is divide it by (24*60) to make the size of the bars measurable in days. (If the calculation is in seconds you'll need to divide it by 24*60*60.)
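Using the field names from the example above, the size calculation therefore becomes:

DATEDIFF('minute',[Start],[Finish]) / (24*60)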

Now what we see is:

Still not quite there. The issue here is that "Start" is at the wrong level of time detail. I added the Start Time to the Columns shelf and chose the continuous 'day' level. What is actually required is the 'exact date'; this will then give you the desired result.

It's always worth checking what these fields contain using the Browse Tool and matching each field to the JSON_Name field. What you'll find is all the date records (amongst other information) in JSON_ValueString, all the data values in JSON_ValueFloat, and some redundant data in the JSON_ValueInt and JSON_ValueBool fields.

This output, therefore, requires a few different steps to reach the same end result as before.

Essentially what we need to do is to separate the dates from the JSON_ValueString and the data records from the JSON_ValueFloat into two sets and then join them back up at the end.

After using the Text To Columns Tool, the first step is to cross-tab the data.

Before using the Cross Tab Tool I removed all of the blank rows that correspond to the date by using a filter to remove any rows where JSON_Name3 = 0.

If you are trying to download a more detailed dataset, for example stocks, then you need a Select Tool to change the JSON_Name2 into an integer data type otherwise the cross-tab will sort the data alphabetically, as it reads the field as a string.

It’s also worth removing some of the unwanted fields at this stage too.

There should now be 8 records coming out of the Cross Tab Tool.

The date part of the data can be extracted using a Filter Tool to remove all Null rows from the JSON_ValueString field.

What I also like to do is convert the strings into date format using the DateTime Tool. This will further prepare the data, ready for analysis in Tableau.

Lastly add a Record ID Tool to both data sets and join them, using this ID field with a Join Tool.
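In JavaScript terms the whole split-and-rejoin boils down to something like the sketch below (field names as in the parsed output above; the real flow has the cross-tab step in between, so treat this as illustrative only):

// Simplified sample of the parsed JSON rows
var rows = [
  { JSON_ValueString: '2015-07-01', JSON_ValueFloat: null },
  { JSON_ValueString: '2015-07-02', JSON_ValueFloat: null },
  { JSON_ValueString: null, JSON_ValueFloat: 42.5 },
  { JSON_ValueString: null, JSON_ValueFloat: 43.1 }
];

// Split: date records from JSON_ValueString, data records from JSON_ValueFloat
var dateRows  = rows.filter(function(r) { return r.JSON_ValueString !== null; });
var valueRows = rows.filter(function(r) { return r.JSON_ValueFloat !== null; });

// The Record ID + Join step: pair the two sets up by position
var joined = dateRows.map(function(r, i) {
  return { Date: r.JSON_ValueString, Value: valueRows[i].JSON_ValueFloat };
});
// joined: [{ Date: '2015-07-01', Value: 42.5 }, { Date: '2015-07-02', Value: 43.1 }]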

In order to get the column names you’ll need to extract them from the metadata again.

Use a Filter Tool to select only JSON_Names that contain "column_names", then select just the JSON_ValueString and use the Dynamic Rename Tool to relabel the headers.

In an upcoming blog I'm going to explore downloading multiple data sets using this approach, organising them and using a trellis plot, all created in Tableau, to analyse correlations between downloaded API data.

Stay tuned for more examples of connecting Alteryx to an API. If you have any questions or comments, please use the form below or get in touch using info@theinformationlab.co.uk

This blog post snowballed in my head from a conversation last night with my colleague Carl Allchin (or to give him his full title: Carl Allchin Data Ninja) based on some work we did yesterday at our Data School, so thanks to Carl and the Data School gang for seeding the idea for something that really needed putting out on the blog.

Let's start with a test: please tell me what the Level of Detail is (i.e. what each mark represents) in the visualisations below:

Now try again, what does each mark represent:

(Incidentally, this visualisation shows what the UK would look like if we asked only people who bought their house in 2014 to turn on their light – of course this suffers from the traditional mapper's curse that it really only shows population density….)

Easier the second time round, right? What was the difference? Well, we could use the Detail Shelf to check which dimension was controlling our view in the second set of visualisations. In the first set we were telling Tableau not to aggregate the measures, and so we lost the visual indicators of the Detail level.

Aggregate Measures

You’ll find the option to Aggregate Measures in the Analysis Menu. If you haven’t seen it before go find it….I’ll wait….back already? Enjoy that? Good, now walk away from the menu item, back away…further. That’s right, you wanna forget about this sucker. Push it right to the back of your mind.

Why shouldn’t you touch Aggregate Measures….yet

The Aggregate Measures option is the hot sauce of the Tableau world. Use it sparingly. By “use it” I mean leave it as the default, On, never turn it Off. By default it’s On and that is for a very good reason. Tableau loves to aggregate. Yes, it’s what it lives for, it’s the default behaviour and by golly it’s the reason we love Tableau.

Yes, aggregation can be confusing. Double click two measures and you get a scatterplot of a single mark; that blows new users' minds. However, now we're in control; we can drag and drop onto the Detail Shelf. Want to look at that scatterplot by Category? Go ahead, drag it onto Detail and there, Tableau just does it. Let me just reiterate: you are in control with aggregated measures.

When I see new users playing with the Aggregate Measures option, turning it off, more often than not it's because they have lost control. They are fumbling with what Tableau is showing them and trying anything to get the view they want. If you turn off Aggregate Measures you are letting Tableau take control of what it shows (it is showing row-level data), so you'd better be sure that that is what you want.

Let's look again at the second image above, the scatterplot; it's from the Tableau Superstore Sales sample data. What is the row level of Superstore Sales?

You might be tempted to say Order level. But let’s look at that other view where I’m in control….

It’s very subtly different!

Yes, there are multiple rows for some Orders in the data. So perhaps it's basket level…. Anyway, regardless of what level the data is at, don't you want to stay in control?! It's much easier to read (as the first exercise showed) and it's much easier to explain!

So the key lesson here is that unless you really, really know what you want to do then leave Aggregate Measures alone.

Don’t be tempted by Dimensions!

Now then, we also need to talk about converting a measure to a Dimension to work around this "detail issue", because it's tempting. I've done it. And down that route madness lies… Let me show you why…

In the images below I've set my view to have Sales as a dimension (as you can see).

I've added an average line; my aim is to show the range of Sales we've made per Basket, and I want to show the average over all Categories. Fine, job done, but wait… here's the same view using Sales as a Measure and with Aggregate Measures turned off…

Notice the Average is different…..Whaaat?!

The key to the difference is realising that as a Dimension, Tableau will show only ONE mark for each value, whereas as an (un-aggregated) measure, Tableau will show as many marks as there are rows with that Sales value. E.g. if two items have been sold at $5, Tableau will show two marks, one on top of the other… in the first view it will show one. This means the average is skewed in the former case.

Here's a simpler example using only three Sales values: 1, 1 and 3.

3 rows of Data

With Sales as a Dimension we have two Marks so the average is (1+3)/2 = 2

Now we have three Marks (two have Sales of 1, so they're on top of each other) so the average is (2*1 + 3)/3 ≈ 1.67.
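If it helps, here's the same arithmetic as a few lines of JavaScript:

var sales = [1, 1, 3];
var avg = function(a) { return a.reduce(function(s, v) { return s + v; }, 0) / a.length; };
var distinct = sales.filter(function(v, i) { return sales.indexOf(v) === i; }); // [1, 3]
console.log(avg(distinct)); // 2      (Sales as a Dimension: one mark per distinct value)
console.log(avg(sales));    // 1.666… (un-aggregated measure: one mark per row)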

So be very careful using a measure as a dimension in order to remove aggregation as it can lead to pitfalls.

Why use Aggregate Measures then?

So after all this "stay away" business, why is the option there? Well, it's all about giving the advanced user control over their visualisation… in the right hands you can shortcut to the viz you want using Aggregate Measures very quickly. But there's another good reason:

Let’s Talk Performance

Yes, there's another very, very good reason for turning off Aggregate Measures versus using the Detail shelf to control the level of detail, and that is performance. When I was building the House Price map for this blog I built the same view twice, two different ways, and there were nearly 900,000 marks in each. Think about what happens in each situation:

Aggregate Measures OFF = "Hey Mr Database, give me every row you have"

Aggregate Measures ON = "Hey Mr Database, can you group by Transaction ID and give me the average Latitude and Longitude"

The result is the same, but in the latter case the database has to do a lot more work. In my test, albeit a sample of one, Aggregate Measures ON was over 10x slower.

Hot Stuff!

So remember there are elements of Tableau that are like Hot Sauce, use them sparingly and you’ll enhance your experience; go overboard and you’ll have smoke coming out your ears before you know it!

In the first blog in this series (Joining Excel Worksheets) I touched upon the structure a CSV or Excel file needs in order to maximise the potential of the data, to analyse the data at its most granular level and to visualise our data using Tableau (more information on this here).

So what if our data isn't in the required format? What if we have workbooks upon workbooks that contain cross-tabs?

When cross-tab meets Alteryx

Reshaping our Excel data is a breeze when using Alteryx: no opening of multiple workbooks, no VLOOKUPs and no copying and pasting, just a few drag-and-drop tools between you and your insights.

In the example below we have two Excel workbooks with (made-up) cricket data: one worksheet contains runs scored and the other contains balls faced.

The first thing to note is that each worksheet contains a cross-tab (table of data) along with header rows and a title within the worksheet. The second factor to note is that the “Balls Faced” worksheet also has an additional blank row in row 2.

This formatting, and the minor differences between workbooks (i.e. the additional blank row), is a common feature of many Excel workbooks that exist in sports clubs, usually due to different people editing them or even just a lack of understanding and appreciation of the importance of a standardised format.

Now let's start the process of turning these two separate cross-tabs into one single comprehensive database (note: you could do this with as many different cross-tabs as you desire).

First we use two “Input Data” tools and connect to the “Runs Scored” and “Balls Faced” worksheets.

I have added a “Browse” tool after each input, this enables you to preview the data coming through the workflow. You can see in the above example that, due to the different structure of each Excel worksheet, the “Browse” tool shows two slightly differently structured outputs.

Also note that the first row of data from the Excel workbooks has been identified as a header in the "Runs Scored" worksheet but as the first row of data in the "Balls Faced" worksheet.

Next we need to remove the unwanted rows at the top of our cross-tab – for this we can use the “Sample” tool. Connect your “Input Data” tools to the respective “Sample” tools and select “Skip 1st N Records” in the configuration. On the “Runs Scored” data we need to set N=1 and on the “Balls Faced” data we need to set N=3 to account for the additional blank row and the fact that no header row was identified when connecting to that data.

You will see in the browse windows (screenshot above) that we have now removed the unwanted rows but are left with the header rows as the first row of data. This is where we introduce our next tool to the workflow, the "Dynamic Rename" tool.

Using the “Dynamic Rename” tool we are able to quickly rename any or all of the fields within the input stream using a variety of methods. In this instance we are going to choose the “Take Field Names from First Row of Data” mode from the drop-down selection in the configuration window. This will rename the fields with the data from the first row (as can be seen in the browse windows below).

We can then use the “Select” tool to rename the field “Field_12” to “Player” to give it a clearer meaning.

The next step is to turn each of our cross-tabs into the row-by-row structure we initially wanted. Using the "Transpose" tool we can reshape the data and pivot the cross-tab. Our key field in this instance is "Player", with all of the matches/innings being the data fields.
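Conceptually, the "Transpose" tool is doing something like the JavaScript sketch below (the cross-tab row is made up for illustration; the tool's output columns are Name and Value):

// A made-up cross-tab row: one column per match/innings
var wide = [{ Player: 'A Batsman', 'Match 1': 24, 'Match 2': 61 }];

// Pivot: keep the key field (Player) and turn every other column into a Name/Value row
var long = [];
wide.forEach(function(row) {
  Object.keys(row).forEach(function(col) {
    if (col !== 'Player') {
      long.push({ Player: row.Player, Name: col, Value: row[col] });
    }
  });
});
// long: [{ Player: 'A Batsman', Name: 'Match 1', Value: 24 },
//        { Player: 'A Batsman', Name: 'Match 2', Value: 61 }]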

So now we have our two original cross-tabs in the structure we want; all we need to do now is join the two data streams to provide us with the comprehensive database we require. To do this we will use the "Join" tool. This enables us to join two data streams together on specified fields, which in this case are "Player" and "Name" (the match/innings name). I will also take this opportunity to use the rename and type options in the "Join" tool's configuration to give the Runs Scored and Balls Faced fields the correct names and to make them integers.

All we need to do then is export our data to the required file (in this case a .tde) and there we have it. In one short workflow we have taken two related cross-tabs from two separate Excel workbooks with slightly different structures and combined them into one database that will enable us to gain a deeper understanding of our metrics and their relationship with each other.

So you have your athletes or players and you have their data, ranging from hydration and saliva scores through to wellbeing questionnaires and GPS data. Each dataset is collected by a different member of backroom staff, all stored in different Excel workbooks, with athletes being called something different in each workbook. Alongside this you have .csv files from external providers such as Opta, Tracab, Prozone, Catapult and StatSports.

And then the question comes from your boss…

Is there a correlation between hydration scores and wellbeing questionnaire results?

Does the intensity of our training sessions have an impact on match day output?

So it's time to jump into Excel and spend your valuable time looking through multiple workbooks, establishing the structure of each workbook and identifying what its creator has called each athlete, before you go and implement various formulas, including multiple VLOOKUPs, along with no doubt a decent chunk of copying and pasting between sheets. Let's hope you don't have anything important for the next few hours, like a training session or a match.

If only there was a more efficient and quicker way of doing this. Well, there is…

Welcome to Alteryx

Firstly, let us understand what Alteryx is in its simplest form. Alteryx provides a visual means of putting your thought process into action, much like a flow chart.

Before we start, it is important to know of the ideal structure of data in an xls or csv file.

For best practice we should have our data in a row-by-row format, where each row contains a new line of information. So in our examples below there is a new row for every athlete, and then for each athlete there is a new row for every date. We then have data about one piece of information in each column (i.e. hydration score, distance, etc.).

We should have the data in as granular (unaggregated) format as possible to allow us to explore the data to a deeper level.

Now let’s take a look at this scenario for instance:

You have a hydration database completed by your sport science intern, with the athlete’s name, the date and their hydration score.

Then you have your csv exports from your GPS technology provider and your match data provider, with each athlete’s physical output for each date.

How do you even start to look at these three different sets of data in a holistic and cohesive manner? Particularly when you consider that the above examples are just a limited number of metrics for just three athletes over just one week, how would you even go about doing this for, say, 30 footballers, across 46 weeks, with 200+ metrics? Let's find out…

What do I want to do?

As previously mentioned, Alteryx provides a means for you to visually construct your thought process, so let us approach the above scenario in this way.

Step One: I have three databases (hydration, GPS & match data).

Let’s introduce these three databases to the workflow then.

By dragging the “Input Data” tool on to the workflow and selecting the relevant file from the configuration options and repeating this for each database, I have established my three databases.

Step Two: I need to make my databases cohesive

As mentioned earlier, many databases within most sports clubs use different names for the same athlete; in this example we have Joe Bloggs, J Bloggs and Joseph A Bloggs. So we need to ensure that our final database recognises these three different names as the same athlete. We will do this by creating a unique PlayerID for each athlete.

There are numerous ways in which we could do this, but I’m going to approach this using a control database created in Excel.

This Excel sheet has a row for each athlete with a unique PlayerID and the various names an athlete is known by. We then need to add this “Names database” to the workflow as below.

Once we have done this we can use the “Find Replace” tool and connect the databases (hydration, GPS or match data) to the “F” input and the control database (Names database) to the “R” input.

Select the target field (PlayerName) and source field (Match Names/Hydration Names/GPS Names) and choose the option to “Append Field(s) to Record” and select “PlayerID”.

Once we have done this for each database we will have a clear unique ID for each athlete (PlayerID).
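Under the hood this is just a lookup-and-append. Here's a rough JavaScript sketch of the idea, using the names from the example (the field names are illustrative):

// Control data: one entry per known spelling, all mapping to the same PlayerID
var nameLookup = {
  'Joe Bloggs': 1,
  'J Bloggs': 1,
  'Joseph A Bloggs': 1
};

// What the Find Replace append does: attach the unique PlayerID to every record
var hydrationRows = [{ PlayerName: 'J Bloggs', Date: '2015-08-01', HydrationScore: 1.2 }];
hydrationRows.forEach(function(row) {
  row.PlayerID = nameLookup[row.PlayerName];
});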

Step Three: I need to join my databases together

Our next step is to join our databases together so that we have one complete database with all the required metrics, allowing us to approach our analysis in an all-encompassing manner.

We do this by using the “Join Multiple” tool and join by specific fields. In this case we want to join on our newly created “PlayerID” field as well as the “Date” field as shown in the configuration window below.

You will also see that I have used the configuration window to remove some unwanted fields by unticking them, because they hold the various names for each athlete in each database, and because a "Date" field is being carried through from each of the original three databases.

To finish this configuration I have also renamed (by typing in the "Rename" column) the GPS and match database fields to ensure the end user can clearly distinguish between training output and match output.
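Conceptually the join looks something like this rough JavaScript sketch, keying every record on PlayerID + Date (the Join Multiple tool's configuration controls how unmatched records are treated):

// Simplified sample rows from the three databases
var hydration = [{ PlayerID: 1, Date: '2015-08-01', HydrationScore: 1.2 }];
var gps       = [{ PlayerID: 1, Date: '2015-08-01', TrainingDistance: 5200 }];
var match     = [{ PlayerID: 1, Date: '2015-08-01', MatchDistance: 10400 }];

function key(r) { return r.PlayerID + '|' + r.Date; }

// Merge every table's fields into one record per PlayerID + Date
var joined = {};
[hydration, gps, match].forEach(function(table) {
  table.forEach(function(row) {
    var k = key(row);
    joined[k] = joined[k] || {};
    for (var field in row) { joined[k][field] = row[field]; }
  });
});
// joined['1|2015-08-01'] now holds all metrics for that athlete on that date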

Step Four: I need to visualise my data

Our final step is to output our new database as a .tde ready for use in Tableau.

Here we use the "Output Data" tool and select the location to write the file to, along with the file format (.tde).

Success

And there we have it: you now have a workflow that combines all of your sports data into one comprehensive database, allowing for a greater level of analysis to be carried out within Tableau, leading to greater insights and a deeper understanding.

I recently produced the visualisation below showing Averages and Detail on the same scatter plot. Take some time to look at it, then download the data and try to recreate the same view. Don't cheat.

Definition of Type 2 SCD

This method tracks historical data by creating multiple records for a given natural key in the dimensional tables with separate surrogate keys and/or different version numbers. Unlimited history is preserved for each insert.

For example, if the supplier relocates to Illinois the version numbers will be incremented sequentially:

Key  SCode  Supplier        State  Version
123  ABC    Acme Supply Co  CA     0
124  ABC    Acme Supply Co  IL     1

Another method is to add ‘effective date’ columns.

Key  Code  Supplier        State  Start_Date   End_Date
123  ABC   Acme Supply Co  CA     01-Jan-2000  21-Dec-2004
124  ABC   Acme Supply Co  IL     22-Dec-2004

The null End_Date in row two indicates the current tuple version. In some cases, a standardized surrogate high date (e.g. 9999-12-31) may be used as an end date, so that the field can be included in an index, and so that null-value substitution is not required when querying.

Alteryx Solution

For the data, I used start/end dates and also a current flag to show which row was current (initially everything is current), a variation of the approach shown in the Wikipedia article above:

The steps I took:

1. Import the data I want to import (in this case I used a Text Input tool, but you could use any data source that matches the data table you need to import).

2. Use a Dynamic Input tool to connect to the current data that already exists; in this case I want the rows where the [Row ID] is in my input data (i.e. I'm updating that row) and where it is the current row (I used a [Current] flag set to 1 for that, but could equally have checked where the End Date was 2099-12-31, the default).

If you aren't familiar with it, the Dynamic Input tool can be used in many ways; here I am using it to return only certain rows of a much larger table. In order to do this I first define a sample input, using what looks like a standard input option. In this case I connected to a database and used the following query:
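The query was along these lines (the table name here is just a placeholder for whatever your SCD table is called):

SELECT * FROM [SCD_Table] WHERE [Row ID] IN (99) AND [Current] = 1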

i.e. selecting the current row and using 99 (more on why in a sec) for the [Row ID] (our primary identifier for the rows in this case).

Next I chose to update the 99 dynamically based on the rows we're actually updating. To do this, choose the Modify SQL Query option and choose Update Where Clause in the drop-down:

I want to update that 99 part of the query and change it to the values in our input data, grouped into an IN clause (e.g. if there are values 1, 2 and 3 it will replace the query with WHERE [Row ID] IN (1,2,3) AND [Current] = 1).

Here’s what the tool looks like when I’m done:

3. These existing rows from the table need an end date of today and also need the Current flag setting to 0 so they are no longer current. Simple formulae do that:
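For reference, the two Formula tool expressions are roughly this, assuming the field names used in this example:

[End Date] = DateTimeToday()
[Current] = 0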

4. In a similar way we need to add those fields to our new data, but the start date needs to be today, with the default end date and a Current flag of 1 to show the data is new.

5. We union the data to get it into one stream

6. We update existing rows and insert new rows using the output tool to the database:

Option 5 is the key one here: using the Update/Insert option with the database's primary key (which in this case is set to Row ID plus Start Date, a unique combination), the tool will update the existing rows (applying the Current flag and end date we set) and also insert the new rows.
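Pulling steps 2 to 6 together, the logic is roughly this JavaScript sketch (field names are illustrative; this mimics the workflow rather than being the Alteryx implementation itself):

var today = new Date().toISOString().slice(0, 10); // e.g. '2015-08-14'
var HIGH_DATE = '2099-12-31'; // the default 'open' end date

function applyType2Update(existingRows, incomingRows) {
  var incomingIds = incomingRows.map(function(r) { return r.RowID; });
  existingRows.forEach(function(row) {
    // Step 3: close off any current row that is being superseded
    if (row.Current === 1 && incomingIds.indexOf(row.RowID) !== -1) {
      row.EndDate = today;
      row.Current = 0;
    }
  });
  incomingRows.forEach(function(r) {
    // Step 4: insert the new version as the current row
    existingRows.push({ RowID: r.RowID, Value: r.Value,
                        StartDate: today, EndDate: HIGH_DATE, Current: 1 });
  });
  return existingRows; // Steps 5 & 6: one stream, ready to update/insert
}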

The result:

I hope this makes sense; feel free to contact us if you have this requirement and can't follow the logic above.

Welcome back! I'm glad to have had lots of great feedback about this series, and this time I return with a video on Action Filters. I want to help you get the most out of them and truly understand some of what Tableau is doing under the surface, and how you can tweak it to your benefit.

I know some people are using this series to try and recreate my examples, so this time I've given you time to pause the video and do that; then you can un-pause and watch my solution if you struggled.

Here is the dashboard we want to create, and I run through some interactivity features in the video below. Please don't dwell on the aesthetics and best practice, or lack thereof, in the dashboard; it is just an example to illustrate functionality. Enjoy.