How to set up your own custom cookie to store important marketing-related information about a website visitor.

In a moment that could only be described as déjà vu, I came across a solution to a problem I was having, and it turns out I'm not the only one who has had it. I was sending website traffic to one of the company's many websites from a paid platform and was not getting the conversion return I expected. I tested, changed, and tweaked for weeks, hitting a new low in my acquisition confidence, until I tried explaining the situation to a friend of mine.

He very meekly invited me to explain what I was doing, and I gave him the ELI5 treatment (explain it like I'm five years old). As the words came out of my mouth I realized that I was not tracking website visitors who navigated away from my landing page. I mean, they were tracked, but they weren't attributed. I knew about Google's multi-channel tracking capabilities and decided to see what the report would hold.

If I had a screenshot of that report it would look like a rainbow, because all of the channels were color-coded and my user reports were all over the place (in terms of channels). [See an example below from the Demo analytics account]

I remember that the top-converting channel combination was simply "Direct," and after that it was an average of 9 channels per conversion. I was in the "Assisted Conversions" report and saw that although my advertisements were fighting to break even in the first month on last-click conversions, they were more than carrying their weight by assisting future conversions.

So Google knows, in one of its reports, that my paid ads are generating conversions, but not always within the last-click attribution window. Using this information I realized that, on average, it took about nine interactions to get a conversion. Not that every one of those nine interactions was equally valuable, but it did tell me that my paid channels were generating value, and I wanted that information passed into my CRM.

Here was the problem: how do I store and pass marketing attribution data into my CRM from those channels that simply assisted in the conversion to prove their value?

So let's break this down into the elements of the question that we need to solve in order to answer it correctly:

How do I store marketing attribution data from the varying channels?

How do I pass that data into my CRM?

How can I make sense of the data to award credit to the proper channels in order to prove the worth of my paid acquisition efforts?


The first element to solve is how to store marketing attribution data from varying channels. There are likely many ways to do this, but I will show you how to create your own cookie to accomplish it. GDPR does not want anyone storing PII (personally identifiable information), but the beauty of this approach is that you can store data related to the channels (UTM values), which does not necessarily constitute PII. As long as you aren't storing data that can lead back to a single customer, you are not violating GDPR (call me on this if I'm wrong?).

So what information do we want to store?

Staying away from PII, we only need to store data right above the consumer level, and the UTM parameters sit exactly there. In terms of granular data I would use the UTM basics: the Campaign, the Source, the Medium, the Content (Ad Group/Ad Set), and the Keyword. None of those contain PII as long as you do not add it yourself. In your CRM, you will want to store these values in two sets of fields: one corresponding to the first time the values were captured, and possibly one for the most recent time they were collected.

For example, if the "first" fields are empty, populate them with the UTM values (call them first campaign, first source, etc.); then always populate a second set of fields (call them last campaign, last source, etc.), overwriting whatever value is already there.
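As a sketch, that first/last rule might look like this in JavaScript, where `record` stands in for a CRM contact and the `first_*`/`last_*` field names are hypothetical placeholders:

```javascript
// Sketch of the first-touch / last-touch rule. "record" stands in for
// a CRM contact's custom fields; the field names are hypothetical.
function applyTouch(record, utmName, utmValue) {
  var firstField = 'first_' + utmName; // e.g. first_campaign
  var lastField = 'last_' + utmName;   // e.g. last_campaign
  // Set the "first" field only once, the first time a value arrives.
  if (!record[firstField]) {
    record[firstField] = utmValue;
  }
  // Always overwrite the "last" field with the newest value.
  record[lastField] = utmValue;
  return record;
}
```

With this rule, `first_campaign` preserves the original acquisition channel while `last_campaign` always reflects the most recent touch.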

Here’s what you will be storing (feel free to customize this to however you want):

utm_campaign

utm_source

utm_medium

utm_content

utm_term

gclid

msclkid

The values with 'utm' in them will hold the UTM values each time the data is detected and the cookie is created. This creates the effect of a last-click attribution model: every time a user visits from a different channel with UTM values in the URL, the values will update. You could foreseeably add any data from a web visit or from a URL parameter that you wanted, but make sure you clean up your URLs in Google Analytics with filters (that's for another blog article).

Here’s what you need in order to store this data in a cookie:

Google Tag Manager with admin access.

The code scripts, variables, and any relevant tags published in GTM to deploy this.

Custom Cookie Installation Instructions:

Set the trigger to fire across ‘All Pages’ (unless you need a specific page-case).

Set the tag configuration to be a Custom HTML tag type.

In the HTML field, add the code below.
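Here is a minimal sketch of what that Custom HTML tag could contain (wrap it in script tags inside the GTM tag). The parameter list, the 90-day cookie lifetime, and the function names are my own assumptions – adapt them to your setup:

```javascript
// Hypothetical GTM Custom HTML tag body: copies UTM / click-ID query
// parameters from the URL into first-party cookies.
var TRACKED_PARAMS = [
  'utm_campaign', 'utm_source', 'utm_medium',
  'utm_content', 'utm_term', 'gclid', 'msclkid'
];

// Pull any tracked parameters out of a query string like "?utm_campaign=test".
function getTrackedParams(search) {
  var found = {};
  TRACKED_PARAMS.forEach(function (name) {
    var match = search.match(new RegExp('[?&]' + name + '=([^&#]*)'));
    if (match) {
      found[name] = decodeURIComponent(match[1].replace(/\+/g, ' '));
    }
  });
  return found;
}

// Write each captured parameter as a cookie (90-day lifetime is an assumption).
function storeParams(params) {
  Object.keys(params).forEach(function (name) {
    document.cookie = name + '=' + encodeURIComponent(params[name]) +
      '; path=/; max-age=' + (90 * 24 * 60 * 60);
  });
}

// In the browser this runs on every page load (the "All Pages" trigger).
if (typeof document !== 'undefined') {
  storeParams(getTrackedParams(window.location.search));
}
```

Each visit with UTM values in the URL overwrites the cookies, which is what produces the last-click behavior described above.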

Name the tag to comply with your taxonomy structure in GTM and Save it.

At the top, click preview and then view the website where your container will fire.

On the web page, at the bottom (by default), you will see the preview mode; make sure your tag name appears in the list of Tags Fired on This Page.

Next, press F12 on your keyboard (on Windows) or open the developer console (I'm using Google Chrome). Then open the Application tab, and on the left expand Cookies and select your domain.

When you look at the alphabetical list on the right you won't see your UTM fields and values yet… wait, why? Because you must put the UTM parameters into the URL.

Now try reloading the page, but first add the following parameter text to the end of your page's URL (?utm_campaign=test) so it looks like (domain.com/page?utm_campaign=test).

Now when you scroll down in the Application > Cookies > domain view, you should see the field utm_campaign and, to its right, the value 'test' – you did it!

Back in Google Tag Manager you can save or play around before you finally publish the changes to your live site.

Last, if you have a requirement to tell your visitors what data you collect, consult your legal team about what you need to add to your privacy policy and cookie policy.

Step 2: Collecting that data from cookie and storing it into your CRM

I can't begin to know what CRM you use, or whether you are even using a CRM (you know who you are), but if you have digital conversions such as form fills, phone calls, or chats, then you can collect this information. The moment a visitor fills out a form, makes a call, or initiates a chat is the point where the data you have been storing in the cookie gets associated with that conversion.

I cannot cover a how-to for every type of conversion, but I will detail the form because it's the simplest, and the approach can be applied to the other conversion methods depending on what you do. You will capture this information using hidden form fields that pull the data out of the cookie and store it in your CRM at the point the form is submitted. I would also make sure to add the information you are gathering to your privacy policy, or at least mention below the form that you are gathering marketing data for the purpose of serving visitors a better experience.

Some form builders have built-in tools to pull data from a cookie, or proprietary methods that could require customized JavaScript or jQuery (I just have the basics). For example, I know Marketo has a hidden field type that can pull from a cookie, which makes this very easy to set up. If you use HTML forms, or have form fields that can be populated from a JavaScript or jQuery snippet (and your web page accepts those languages), then you can use this method:

Add this script to your form, page, or to your form/landing page however you can:
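A minimal sketch of that method, assuming each hidden input's name attribute matches the cookie name (an assumption – adjust the selector to your form builder's markup):

```javascript
// Sketch: read the attribution cookies and copy them into hidden form
// fields. Assumes each hidden input's "name" matches the cookie name.
var COOKIE_NAMES = [
  'utm_campaign', 'utm_source', 'utm_medium',
  'utm_content', 'utm_term', 'gclid', 'msclkid'
];

// Look up one cookie value by name from a raw cookie string.
function readCookie(cookieString, name) {
  var match = cookieString.match(new RegExp('(?:^|;\\s*)' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

// In the browser, fill the matching hidden inputs on page load so the
// values ride along when the form is submitted to the CRM.
if (typeof document !== 'undefined') {
  COOKIE_NAMES.forEach(function (name) {
    var value = readCookie(document.cookie, name);
    var field = document.querySelector('input[name="' + name + '"]');
    if (value && field) {
      field.value = value;
    }
  });
}
```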

Step 3: Making sense of all this data to paint a better picture of multi-channel attribution

The use cases are many, bringing me back to my déjà vu. In the few years since I figured this out I've run into dozens of fellow marketers struggling with a problem this could solve. Not every conversation ended with them slapping together their own cookie, but it was a relief for them to learn that a solution existed if they were willing to go through with it.

So here's my warning: don't build this, set it up, and then forget about it. You need an applicable use case for how you are going to use the data. Even if I tell you this is amazing because it helped me do X, Y, or Z… you still need to know how it's going to benefit you.

Knowing the attribution of your conversions is only half the picture. The other half is how you can impact the conversion for your higher-value customers. Use this data to better understand the differences among your customers; do not look at your customers in aggregate without considering that they fall into different segments. Users who turn into high-value, long-retention, high-frequency buyers will be your best customers, so spend the (extra) effort required to understand them and where they come from.

Slice your customer data by value segments or retention segments and then take a look at your attribution channels to see what you find.

Optional Cookie Code I’ve Seen:

Why use the dataLayer? You could use the dataLayer across domains… and there are other reasons if you want to dig around.
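A sketch of that dataLayer variant – pushing the captured parameters into GTM's dataLayer so other tags can read them via Data Layer Variables. The event name `attribution_captured` is my assumption, not a GTM built-in:

```javascript
// Sketch of the dataLayer variant: push captured parameters into the
// GTM dataLayer so other tags can read them via Data Layer Variables.
// The event name "attribution_captured" is an assumption.
var dataLayer = (typeof window !== 'undefined' && window.dataLayer) || [];

function pushAttribution(params) {
  dataLayer.push({
    event: 'attribution_captured',
    attribution: params
  });
  return dataLayer;
}
```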

Kevin Dieny

Marketing Professional

Here is a great resource for you if you want to learn more about the DataLayer:

How to unify marketing insights from silos of data so you have a complete picture of the customer journey.

Unifying the user for marketing attribution requires a unique identifier across the discrete silos of data in order to merge them. Sometimes you have a lot of data that can be merged, but possibly not 100% of it. The good news is that most analytics and data collection tools have a unique identifier. The bad news is that few tools share the same identifiers with the same data in those fields, such as emails, cookie IDs, phone numbers, addresses, identification numbers, or semi-randomly derived values. This sounds complex, and you might be asking yourself, "How do I work with this in my situation?"

(Tip: You will need a field in your data to account for every system or tool’s unique identifier).

“How do I work with this in my situation?”

First, it's helpful to start with where you want your central source of truth to be. Where do you want to go for the data? What system do you have that is equipped to contain all of the data you need? Data collection is the first step toward unifying the user, which is why most marketers lean on their CRM or marketing automation tools; those systems tend to be robust enough to handle it. In my opinion, I recommend a CRM (customer relationship management) tool.

Second, you need to take inventory of all the data you currently have access to. Do not focus on all the data you could have or might have… start with what you have today. You will need to inventory all of the data that touches the customer either directly or indirectly. You might want to prioritize the data that most directly interacts with the customer (where you can see the customer in your data) and sources that are easy for you to access.

Third, you need to model your data inventory to draw out some ideas for how it will all merge together. Use a whiteboard, mind-mapping, or sketching software to help you. To model the data you need to think of everything in terms of tables (think of Excel). Rows will always correspond to your unique identifiers (typically over time). Data that is too large will make it impossible to work in software like Excel, because it will crash on you; this is why you can't just rip all of the data from everywhere – you need to begin with a model. You don't need all the rows of data, but you do need all of the columns (the metrics and dimensions) that add context to your data. With this model you can see which unique identifiers cross silos and which cannot.

Fourth, you will need to write a recipe of sorts that details what needs to be merged with what, when, and in what order. When two identifiers match up between two tables of modeled data, you end up with a single table that includes all of the information combined. For example, say you have a table of names and emails, and another table with phone numbers and emails; when you combine them you will have a table of names, emails, and phone numbers. The emails column was the unique identifier that connected the two data silos, and where the emails matched up you ended up with phone numbers alongside the names. Data you feel you don't need doesn't have to come along – but this is why we model, so you can see ahead of time what work needs to be done.

How does this whole process look?

Let’s go through a hypothetical example using some free tools and how you could merge that data into a business intelligence tool (in this case we will use Google Data Studio). For our example let’s assume you have the following tools with data in them:

Google Analytics (Website data)

Email (Could be any email platform)

Digital Advertising (Could be any advertising platform)

CRM (Could be any customer relationship platform)

Step 1 – Decide what your ultimate source of truth will be.

Among these tools we have three possible sources of truth: Google Analytics, the Email platform, or the CRM. I recommend not using Google Analytics, because you are not allowed to store personally identifiable information there unless it is hashed or stored in a private way. Depending on how robust your Email platform is, you could use it as the source of truth, but typically these systems are not built for this primary function. Feel free to use anything you can to achieve this, but for this example I will use the CRM.

Step 2 – Take inventory of your data sources to get your metrics and dimensions.

Google Drive is free if you have a Gmail account, so for this example I'm creating a Gmail account and jumping into Google Sheets. In Sheets (similar to Excel) I am going to inventory the data and do all of the merging with liberal use of the INDEX MATCH function (see this article if you want to know how I do this).

For Google Analytics you can use the free Google Analytics add-on for Sheets to pull your data. I'm going to save you some time: if you use the standard setup of GA, you aren't using the User-ID view and you are not hashing or storing your personal data in custom dimensions/metrics. Let's deal with the basic setup here – in which case the unique identifier is the cookie ID or a UTM parameter. The trouble is, if you aren't capturing this into your Email platform or CRM with a form field that picks it up, you will not be able to use this data to merge. You can, of course, see data in aggregate – like X visitors came from a specific email (assuming you used UTMs) – but you cannot see this for each unique customer unless your systems are set up to account for unique user identification. By default they are not set up this way.

So what now? Your standard Google Analytics setup is valuable for website analytics, but it does not come out of the box with a unique user identification system; you have to do some work to set it up that way. I wanted to include this mention, though: if you have set up your analytics to match the user to a specific ID that is stored in your CRM/Email provider (because you capture it there as well), then more power to you! In that case you can use the add-on to pull any metrics and dimensions (along with the identification field) to add website analytics to your model.

For Email, you have the easiest source of data yet, because you at least have the email address of each unique record in your system. After that, any data you have corresponds to information that adds context to that email, such as first names, last names, company information, demographic information, any activities/interactions that took place, and possibly commerce-related fields. Most email platforms have a way to export data, but if you can only download aggregate data (leaving out the individual emails) then you will not be able to do any tying of its data to other systems.

Email-to-landing-page insights might also exist in your email platform, but likely will not exist in Google Analytics (tied to a user) by default, because Google Analytics removes personally identifiable information like this on purpose.

Digital Advertising data collection for the unified user is similar to email – whatever platform you are using for forms is going to have the user identified by email. If, however, you are not using an email platform like this and have your CRM attached to the forms, then you will look there. Either way, the only way to attribute digital advertising activities to users is at the capture point, such as a form, a chat, an email, or a phone call. The point where a user becomes known is often described by platforms as a "Conversion." By itself, saying something is a conversion in marketing is not very descriptive.

The last, and hopefully the easiest, place to get customer data is the customer relationship management platform (aka the CRM). In order to model the data you need to work with it in its tabular format. So export all of the user-related data from your Google Analytics (if the unique identifier exists), Email, Digital Advertising, or CRM platforms, with at least the relevant dates and the unique identifiers. All the other metrics, dimensions, and context of your data is not necessary for modeling.

Step 3 – Now we will model the data you've exported and align the unique identifiers.

If your data is not too large, you can hold the data from one table in the first Excel tab and the data from another table in the second tab. Assuming this is the case, we are going to use INDEX MATCH to merge the second table into the first (or vice versa). The two data tables need to have at least one unique identifier in common. I would start with the table that has the most data potential (like the CRM) as the first tab and merge all data into it.

(Tip: If your data is too large for Excel or Google Sheets, then you need a database; SQLite can run on a desktop, and you can provision a file for this purpose. Is it worth it? If you are working with that much data, you should really consider what value improved data insights could provide.)

As you work through the data modeling, remember (or better yet, write down) all the steps you took to merge the data. The index data (the first part of the formula) contains the data you want to add from the second tab into the first. The match data (the second part of the formula) contains the data you are comparing to see if there is an exact match of the unique identifier. For example, the index data could be the names, and the match data could be comparing emails to ensure they match.

(Tip: This is where you might realize that the data in one matches the other but there are spaces or formatting issues in one table. Therefore you might need to use format cleaning in the Excel document or change the way data is collected to make sure it’s scrubbed prior.)

The cleanliness of the data you are merging is a major factor. You need to rinse and repeat the INDEX MATCH formula for each column of data in the second tab that you want merged into the first. At the end you will have a much larger first table than you started with.
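If it helps to see the logic outside a spreadsheet, the INDEX MATCH merge boils down to a keyed lookup. Here is a sketch in JavaScript, assuming each table is an array of row objects sharing a hypothetical `email` identifier:

```javascript
// Sketch of the INDEX MATCH merge as a keyed lookup: for each row of
// the first table, find the matching row in the second table by a
// shared key and copy across any columns the first table lacks.
function mergeTables(first, second, key) {
  var lookup = {};
  second.forEach(function (row) {
    lookup[row[key]] = row;
  });
  return first.map(function (row) {
    var match = lookup[row[key]] || {};
    var merged = {};
    Object.keys(row).forEach(function (k) { merged[k] = row[k]; });
    Object.keys(match).forEach(function (k) {
      if (!(k in merged)) {
        merged[k] = match[k];
      }
    });
    return merged;
  });
}
```

For a table of names and emails merged with a table of phone numbers and emails, joining on `email` yields rows with names, emails, and phone numbers combined.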

(Tip: After you are done merging the tables copy and paste the values but do not paste the formulas, paste just the values so no formulas exist in the cells but just the values.)

Now that you've done this for two tables, open a third tab (if it's possible), or delete the old second tab and start a fresh, empty one. Time to merge the next table into the first – so make sure both tables have at least one unique identifier in common. Continue this for all the data tables you wish to merge until you end with a final table of glory… hah.

Step 4 – The last step is to write down your process and make sure you can account for everything.

Writing down the process in this way prepares you to work with Structured Query Language (SQL), the basic language for processing data in most, if not all, systems. You are working in Excel because that is a great place to start, but you are learning a skill that can be applied to large volumes of data on a SQL-based server. Merging in SQL is done using JOINs, just as it is done with INDEX MATCH in Excel.

Assuming you got this far now it’s time to have some fun in Google Data Studio. You do need a Google Account, but it’s easy to create one (with a strong password) and then jump in there:


Get started by saving your final data table as a CSV (.csv) file and then uploading it as a data source into Google Data Studio.

After your data is in there, create a report. You can jump into a template designed for what you are trying to measure, but usually you can open a blank template and then add charts and data visualizations to play around.

I recommend that at the end of the day you watch some videos (take a training) on how to get the most out of Google Data Studio. At first some of it is daunting but it’s a really great tool and one that has amazing features. Most of what you learn you can even take with you into paid platforms.

I wish I could expand more into different data types but this can be so subjective for different businesses. At some point I will tackle the user-id in Google Analytics but it is well covered in other blogs. My experience with user-data in Google Analytics is that while it is nice to have it also means some of the elements of Google Analytics function differently.

The right way to use UTM parameters is to use them to attribute marketing activities in a universal and standardized format.

“Organization is a journey, not a destination.” – Mom’s Everywhere

If you are not using UTMs in your marketing links… get on it! If you are, use this as a refresher to ensure you are using UTMs efficiently and the right way. I say right way, but what I am about to share are the best practices for using them in any marketing plan.

Rule of Thumb? Always use them whenever a link points to your website (property) and the link is going to be placed somewhere not on your website (property).

I didn't bother when I started in marketing, and now I wish I had. UTMs are not the answer to the ever-present issue of marketing attribution… but they are a big step forward and a requirement for some future attribution tools. Perfect solutions for marketing attribution do not (yet) exist; remember that.

UTM?

UTM stands for Urchin Tracking Module; Google purchased the Urchin company's tool and rebranded it. Creating a UTM is simply a matter of adding details to the end of a URL that specify marketing information. The marketing information you specify is always connected to topic-focused campaigns, so you know the who, what, when, where, etc. behind the link clicks.

Types:

There are many types of parameters that can be utilized within UTMs. The standard types are:

Source

Medium

Campaign

Term

Content

Stick to the five; you really do not need to go overboard with adding UTM parameters, because by default Google only recognizes the standard five. Non-standard parameters, or custom UTMs, are useful only when additional tracking tools exist to pull those fields and match them up with specific fields in a CRM or database.

JavaScript, PHP, and other languages can pull the parameter data from a link's URL and insert that information into form fields. This is a much more complex way to use UTMs, but if you are looking for a way to pass URL parameters into form fields – for example, for complex cross-domain tracking – then you will need to push that data into forms so it gets stored in your CRM.

In addition, you can try to achieve a form of first/last-click attribution by passing UTM parameters into a cookie (I will discuss this in an advanced UTM tutorial).

Builder:

You should not be using UTMs on every single link. As mentioned in the rule of thumb, you only need UTMs on links that direct to your webpages, and only where the link will be posted somewhere that is not on your website – that is, in channels you do not own.

This builder allows you to add marketing details to a link in a quick and easy way. Google's tool makes it easy to copy/paste, and even gives you the ability to shorten your URL (but I do not recommend that unless you absolutely must). Pop open the builder and add only the information that's relevant; more is not always better.
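Under the hood, the builder just appends the parameters you filled in. A sketch of that behavior, with example values that are mine, not Google's:

```javascript
// Sketch of what a campaign URL builder does: append only the UTM
// parameters you actually filled in. The example values are hypothetical.
function buildUtmUrl(baseUrl, params) {
  var parts = [];
  Object.keys(params).forEach(function (name) {
    if (params[name]) {
      parts.push('utm_' + name + '=' + encodeURIComponent(params[name]));
    }
  });
  return parts.length ? baseUrl + '?' + parts.join('&') : baseUrl;
}

var tagged = buildUtmUrl('https://domain.com/page', {
  source: 'newsletter',
  medium: 'email',
  campaign: 'spring-sale'
});
// tagged: https://domain.com/page?utm_source=newsletter&utm_medium=email&utm_campaign=spring-sale
```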

Avoid these mistakes:

Capitalization – keep everything lowercase.

Repeating a word or phrase in more than one UTM parameter.

Using a date in more than one parameter – use it only once.

Spaces between words – never, anywhere.

Unreadable values – use '-' between words so they stay readable.

Overcomplicating it – KISS: keep it simple, stupid.

Values other teams cannot understand – aim for cross-functional readability.

Naming:

Whether it's just you or a whole team of marketers, you need to keep track of your naming conventions. UTM normalization is the easiest way to drive adoption and understanding across teams. An easy way to track past conventions is to use an Excel file or shared Google Sheet and paste the links you use into it. This way you can refer back to what you used before. See my example above.

Always use the same exact (case-sensitive) name. If it's "google", use "google" and not "google.com". How you name them matters. This needs to be known and available to everyone who will be creating links. You need a naming convention – a standard – and it should be easy to access.


In review, use UTM parameters in your marketing activities. If you are not, spend some time to set up a shared excel or Google sheet and come up with conventions. The earlier you make this change the better. Get everyone on board; this will make marketing attribution much more streamlined and save you tons of time and money down the road. You will thank yourself later when questions about attribution come up and you want more budget.

Marketing saturation is how exposed a consumer is to your messaging, and how likely they are to ignore it.

“Stand out, without sticking out.” – Friend

The important concept behind marketing saturation is entropy – the phenomenon of natural decay, or decline over time. In marketing, all business activities will slowly perform worse and worse, on average. Usually saturation follows a normal distribution; once it hits its peak, it undergoes entropy and steadily declines. The more frequently someone is hit with the same message, after a point, the worse it will perform.

[Image: Gaussian graph of a normal distribution with standard deviations]

Think of saturation as the great unifying force within marketing: it forces new and inventive methods to reach new customers. Everyone deals with saturation, including your competition; it's a healthy and good phenomenon. Advertising platforms often measure this with the frequency metric. A frequency above 2 indicates that, on average, someone has seen your message at least twice, and so on.

The obstacle you will inevitably face is what to do about marketing saturation. My answer will always be the same… innovate! Innovation is the cure – creating new and interesting content is how you continue to thrive and do it better than your competition. You can always add new reach and a new audience to the message, but at some point there will be a finite audience to show your message to.

How do you innovate?

There are plenty of articles online about how to make something new. You can start by asking your team or your customers, or looking at your competitors for new ideas. If your biggest problem is coming up with new messaging, then you should realize… that's amazing! Saturation in some industries is more difficult to overcome because there are only a small number of quality customers and everyone is fighting for them. As competition increases, saturation becomes a very real and tough issue to solve. You are trying to stand out in a big pond.

Cultivating creativity:

Not to steal from the many TED Talks here, but fostering creativity is a lifestyle and work-environment dilemma. Risk aversion is the natural enemy: it does not want you to innovate, and it does not want you to spend money without guarantees. You bring out creativity by allowing for risk – learn from failure, fail fast and take notes, then reward your successes and keep going.

[Video: TED Talk on increasing creativity and innovation]


You want to change your message enough each time that all your hard work does not fall prey to saturation's entropic effects. Each marketing message needs enough of a difference to be unique, but not altogether off-key. Build out a calendar of how often you will develop new content and messaging so you can plan ahead.

If you are hitting the wall and struggling with recapturing the success of the past it’s likely that you need a new message. Working hard in the modern workplace is all about being efficient with your time and energy in a way that allows for creativity and innovation without too much added risk.

Marketing position and perceptual mapping is a valuable tool used in research to visually represent the comparative metrics and dimensions of products, brands, and services.

“When you throw dirt, you lose ground.” – Texas Saying

One of the issues that crops up in every organization is that everyone has different priorities for tasks and weighs those tasks differently. Unity is when everyone has a somewhat uniform perspective within the company at any given time. Meetings are held to unify us – special events, group activities, projects, inter-department meet-ups – it has all been strategically created with the goal of unifying everyone.

Ultimately everyone works together but you want all of your resources working cohesively and cooperatively to be as efficient as possible. Within this complex chain of efficiency lies position and perceptual mapping. Internally, these tools are used to help unify an organization and realign the goals and priorities so everyone is helping and maximizing their efforts.

Externally, this tool is utilized by marketing research to inform the company of how certain customers view specific metrics. The best example of this is asking customers what brand offers the best, “bang for your buck.” This simple comparison we do in our everyday lives compares the cost metric to the value dimension.

The image above is the standard design for any position and perceptual mapping. I will walk you through how to set this up and conduct your own mapping.

Step 1:

You have a question that if you knew the answer to, you could make your company, brand, product, or service more focused, and therefore create more value. The question needs to have a metric and a dimension; two key performance areas that make up the question.

“Compared to our competitors who has the most reliable product for the price?”

Step 2:

A representative audience and a limited-bias format for delivering that question and deriving answers should be used. The audience should be relevant to the population of clients, customers, or whatever group the question pertains to.

The question should be non-partisan, should not lead them, and be conducted professionally. You want to extract quantifiable data not qualitative data, so use scales or assigned values to represent their answer choices. You can also compare each element to each other and create an ordered list which can be turned into a simple scale.

"On a scale of 1 to 10, 10 being best and 1 worst, how reliable is each of these five companies?"

Step 3:

Results should be tallied, statistical review completed, and all relevant data presented with the results. The scaling should match what we will see on the map. The answers will produce a value for the metric and a value for the dimension (an X-value and a Y-value) that you will essentially graph.

“Company Z, has 8 for reliability, but only 4 for price (8, 4).”

Step 4:

Present and evaluate the results (think scientific method). There are a few considerations when it comes to mapping. The position and perceptual map is a limited view based on the people interviewed; it represents how people see the elements when compared on only that dimension and metric, and it only shows how something is currently viewed. Maps can become outdated quickly – everyone is competing, trying to interpret trends and predict the future – so anticipating shifts means redoing the map regularly.

“Right now, Company T is viewed as the price leader to our audience with a 10, while Company J is the reliability leader with a 9.”

The perceptual/positioning map is just that, a perspective, and a view of how things are in the mind of those asked. These are not rigid maps to buried treasure or gold for your company. In fact they may often be skewed and contain any amount of error because the measurement is not perfect.

These maps should be used to match and alleviate position imbalances, help you plot goals, and try to stay relevant. You can identify characteristics you may not have considered – representing fresh opportunities and market share you can conquer.


Marketing positioning and perceptual mapping is scientific but limited in scope. Map what truly matters by starting with a hypothesis and testing for it with professional research. We all know how hard it is to unify – it's wishful and hopeful – but something must be done to focus and streamline goals and priorities.