A good question from a reader today who asks, “My client claims you can't have a From: or Reply-To: domain that's exactly the same as your click tracking domain (what Marketo calls the branding domain). Is this true?”

New users may search in vain for a log of all Filled Out Form events plus their point-in-time form data. (That is, even if field updates were blocked or values were later overwritten, you see the values that were on the form when each person clicked Submit).

First: it's not correct to say that Marketo doesn't store the original data from form posts. In fact, the REST Activity API includes historical POST payloads, including since-overwritten values. But it's true that it isn't easy for a non-developer to see form data history across leads.
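For the developers in the room, here's a minimal sketch of how you'd flatten those historical payloads into rows. It assumes the documented shape of Marketo's Get Lead Activities response (`GET /rest/v1/activities.json`, filtered by `activityTypeIds` for the Fill Out Form type); the sample values and attribute names are illustrative only — the attributes you actually get depend on your instance and form setup:

```javascript
// Flatten one activity record from Marketo's Get Lead Activities response
// into a single flat row (e.g., for appending to a spreadsheet), preserving
// the point-in-time attributes that accompanied the form post.
function activityToRow(activity) {
  const row = {
    activityId: activity.id,
    leadId: activity.leadId,
    activityDate: activity.activityDate,
  };
  // In the documented response, attributes is an array of { name, value }
  // pairs; these hold the values as they were when the person clicked Submit.
  for (const attr of activity.attributes || []) {
    row[attr.name] = attr.value;
  }
  return row;
}
```

You'd call this on each record in a page of results, paging with `nextPageToken`, and check your instance's activity type IDs via the activity types endpoint rather than hard-coding them.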

When seeking such a report outside of Marketo, users tend to ask for a “Google Sheet” (as opposed to an XLS/CSV, an Excel 365 online workbook, or something else). I'm not a Sheets user myself unless forced, but I know how it's done, and it's reeeeeally easy. So that's what we'll use today.

Marketo's segmentation feature is a fantastic way to easily carve out your database into specific audience segments to deliver dynamic content. Rather than having multiple email versions to edit, approve and send through smart campaigns, dynamic content allows you to create one email with different versions for specific sections of content in it.

The drawback of using dynamic content comes into play with reporting. While a segmentation can be applied to the smart list or used as a grouping in the setup of the Email Performance report, this only filters the data based on the segment someone is currently assigned to in the segmentation – not the segment the person was in when the email was delivered using that segmentation.

For example, let’s say you sent an email in March with dynamic content based on a segmentation for Job Title – Manager vs. Executive. At the end of March someone gets a promotion (yay!). The same report you pulled in March will show discrepancies if you pull it in April. Take a look:

Standard Email Performance:

Email Performance grouped by Segmentation:

In March

In April

The April report implies that the Executive version of the email was delivered when in fact it was the Manager version.

In most cases segmentations are built on data that rarely changes, so this reporting discrepancy is not a big issue. But if you build segmentation rules on data that can change often – like the example above – the metrics might not tell the right story, depending on when you analyze the data.

The best practice to avoid this is to run any report that uses a segmentation immediately after the send, and to refer back to those saved results for quarter-end or year-end summary reporting. If that's not an option and there is a hard requirement to know exactly which “version” of an email someone received, then you should (unfortunately) create multiple email assets. While this might seem inefficient, Marketo still provides ways to streamline the build process with tokens (folder- and/or program-level) and cloning capabilities, so you can quickly get emails out the door.

A pesky problem with Munchkin is that you can't selectively turn it off based on visitor characteristics.

You can choose Disable Munchkin Tracking on Marketo LPs, but that's for every visitor, and on your corporate site you're unlikely to have access to an on/off switch. So internal visitors show up in your stats (okay when testing, but bad after go-live), and it would be great to be able to exclude them, wouldn't it?

Although the best place to check IPs would of course be on the Munchkin server itself (thus no extra requests), we can ping a remote service to get the end user's IP, then use a l'il JS wrapper to conditionally load Munchkin based on whether the IP appears in allowed/disallowed Access Control Lists (ACLs).
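Here's a minimal sketch of that wrapper, under some stated assumptions: the IP-lookup service (`api.ipify.org`), the ACL entries, and the Munchkin account ID are all placeholders you'd swap for your own, and the ACL here is an exact-match list where a production version would likely match CIDR ranges:

```javascript
// Simple ACL: exact-match list of internal IPs to exclude from tracking.
// Placeholder addresses -- replace with your own; a production version
// would support CIDR ranges, not just exact matches.
var BLOCKED_IPS = ["203.0.113.10", "203.0.113.11"];

function ipIsBlocked(ip, acl) {
  return acl.indexOf(ip) !== -1;
}

// Inject the Munchkin script and initialize it (browser only).
function loadMunchkin() {
  var s = document.createElement("script");
  s.src = "//munchkin.marketo.net/munchkin.js";
  s.onload = function () {
    window.Munchkin.init("000-AAA-000"); // placeholder Munchkin account ID
  };
  document.head.appendChild(s);
}

// Browser bootstrap: look up the visitor's IP, then load Munchkin only if
// it isn't on the blocked list. If the lookup fails, fall back to tracking
// everyone rather than silently losing all data.
if (typeof document !== "undefined") {
  fetch("https://api.ipify.org?format=json")
    .then(function (resp) { return resp.json(); })
    .then(function (data) {
      if (!ipIsBlocked(data.ip, BLOCKED_IPS)) {
        loadMunchkin();
      }
    })
    .catch(loadMunchkin);
}
```

Note the trade-off: the extra lookup request means a brief untracked window on every page load, which is exactly why a server-side check would be preferable if you had access to one.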

Little-known (and therefore little-hated!), this bug applies only to Marketo’s email-specific Velocity setup, not to the language in general.

If you mentioned it to someone who’s built webpages (as opposed to emails) using VTL, they’d rightly look at you like you’re crazy. And the necessary workaround also goes against every coding principle I’ve ever endorsed.

In other words, please don’t take any general-purpose Velocity guidance from this post, but do use the code for this specific goal.

What's the problem?

Even when you follow the prescription for generating tracked links in Velocity (namely, outputting fully formed <a> tags from VTL, as opposed to outputting bare URLs and trying to plug them into <a>s in the outer email content), you still won’t be able to output multiple tracked links in some common cases.

Customers often ask me for tips on getting the most from their Program Performance reports. Here is a checklist you can use to make sure you have everything set up correctly.

A Program Performance report is great for providing a summary of your programs. The information is pulled from Marketo in real time, so it is always up to date. Metrics include the number of new names acquired by the program and the number of members who achieved success.

Before I get to the checklist, here is a refresher on the different filters and metrics available in the report. The following is a webinar example report.

Program Performance Report Filters:

Channel (e.g. Roadshow, Webinar, Web)

Tags (e.g. Region, Vertical)

Period Cost (e.g. costs incurred between dates x and y)

Programs (e.g. choose a folder containing a group of programs, or a specific program)

Metrics per program:

Number of members

Number of new names

Number who reached success

Metrics if you included period costs:

Cost per member

Cost per new name

Metrics if you select a specific channel to filter on and click Show Program Status Columns:

Number of records in the program per status

Here are some tips for getting the most from a program performance report.

Establish a good folder structure, grouping first by marketing activity (a.k.a. channel) and then by year. This will make it easier to select a group of programs – for example, all of a year's webinars – to analyze within a channel.

Use a good program naming convention that can easily be sorted or filtered if you export the report to Excel.

Create only the channels that are necessary, to reduce confusion and the risk of accidentally using the incorrect channel.