So what's the deal with preferences?
Australian politics has become progressively more interesting, with the share of the primary vote cast for one of the two major parties falling steadily over the last three federal elections. So the upcoming Western Australian state election in March, and four more state elections due in 2018, will all be worth watching if the last federal election is anything to go by.

The Attack of the Minors

2016 Federal Election Share of Primary for parties with over 5% of the vote per electorate.

The 2016 federal election in Western Australia was not as interesting as it was in the rest of the country. Fewer minor parties ran, and neither One Nation nor the Nick Xenophon Team was present, as they were in the eastern states. This won't be the case in the state election, with about 16 registered parties fielding candidates, including One Nation.

2016 Federal Election Other Minor Parties

Most of these probably won't have much of an impact. They won't get the same media attention as the larger political parties and they generally cater to very specific groups – unlike One Nation, which is likely to have an impact, with some polls projecting it to receive 13% of the primary vote.

One Nation and the 2016 Federal Election

2016 Federal Primary QLD

One Nation fielded candidates in a number of electorates in Queensland at the last federal election and attracted a significant share of the primary vote. This can probably be attributed as much to the ongoing decline in interest in the major parties as to the high visibility One Nation attracts from the media.

2016 Minor Parties QLD

The level of interest One Nation attracted echoes a similar phenomenon seen in the 2013 Federal Election in Queensland with the Palmer United Party, which fielded a large number of candidates within the state and received a respectable percentage of the primary vote.

2013 Federal Primary QLD

Where do the votes come from?

National Two Candidate Preference Flows

Australia uses preferential voting for almost all elections. In federal elections for the House of Representatives, preferences are fully distributed to produce a two-party-preferred (TPP) statistic for each electorate. Most of the time this splits the votes of minor parties between the ALP and the Coalition, the pairing of the Nationals and the Liberal Party (or the Liberal National Party in Queensland), except in those rare cases when a minor-party representative or an independent wins a seat or attracts enough votes to come second.
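
As a toy illustration of the mechanics (this is not the AEC's actual counting code, and the ballots and party codes below are made up), full preference distribution can be sketched in a few lines of R:

```r
# Toy illustration of full preference distribution down to two candidates.
# Each ballot is a character vector of candidates ranked first to last;
# full rankings are assumed, as the House of Representatives requires.
two_candidate_preferred <- function(ballots) {
  repeat {
    firsts <- vapply(ballots, function(b) b[1], character(1))
    tally  <- sort(table(firsts), decreasing = TRUE)
    if (length(tally) <= 2) return(tally)    # the two-candidate-preferred count
    loser   <- names(tally)[length(tally)]   # fewest first preferences drops out
    ballots <- lapply(ballots, function(b) b[b != loser])
  }
}

ballots <- list(c("GRN", "ALP", "LP"), c("LP", "GRN", "ALP"),
                c("ALP", "GRN", "LP"), c("GRN", "ALP", "LP"))
two_candidate_preferred(ballots)
```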

The TPP statistics are published by the Australian Electoral Commission (AEC) and are available through the commission's website. They represent a mix of the voters' actual preferences, the way minor parties encourage their supporters to vote via how-to-vote cards, and which two parties finish with the highest counts of votes.

Green Party TPP Preference flows

In Australia the Labor Party (ALP) is seen as left-leaning and the Coalition as right-leaning, though these labels probably don't stand up to close examination. Consequently, it has generally been assumed that the ALP has been leaking primary votes to the Greens, as the ALP has historically sat to the left of the Coalition and there are few alternatives on that end of the Australian political spectrum. This assumption is well supported by the two-candidate preference flows.

Greens Flow of Preferences to ALP (%)

Year    min     max     mean    median  std
2010    43.29   89.93   77.32   78.38   6.26
2013    42.78   91.12   80.89   81.38   6.14
2016    28.51   90.28   80.12   80.70   7.32

Based on the statistics from the AEC, the Greens' preferences flow very strongly to the ALP across the majority of electorates, with the Liberal Party receiving only a much smaller share. Few parties show such a one-sided flow of two-party-preferred preferences as the Greens do to the ALP. These flows also support the assumption that many of the voters the Greens have gained were lost by the ALP, or at the very least come from groups with the potential to become ALP voters.

So which of the two major parties is One Nation absorbing first preferences from? It is easy to assume that those who voted One Nation at the last election would put the Liberal Party ahead of the ALP on the ballot. One Nation does appear to sit on the conservative side of politics, and a lot of the reporting on One Nation and preferences in the upcoming Western Australian election seems to be framed by this belief.

One Nation Party TPP Preference flows

However, this assumption is not very well supported by the data from the last federal election. The two-party-preferred preference flows from One Nation to the ALP and the Liberal Party were fairly equal in 2016, though in 2010 and 2013 there was a bias towards the Liberal Party.

One Nation Flow of Preferences to ALP (%)

Year    min     max     mean    median  std
2010    31.08   71.86   46.74   46.71   9.47
2013    38.03   63.02   45.43   43.95   7.11
2016    45.06   63.73   50.18   48.55   5.29

One Nation Flow of Preferences to LP/LNP/LNQ (%)

Year    min     max     mean    median  std
2010    28.14   68.92   53.35   53.29   9.96
2013    29.20   61.97   50.14   53.13   9.80
2016    36.27   54.69   49.43   51.09   5.29

Grouping the Minors

A correspondence analysis was performed to explore how the parties relate to each other, using the preference flow data from the AEC website. This technique uses frequencies to determine associations within and between the rows (the minor parties) and columns (the major parties their preferences were allocated to) of a cross table.

The data from the 2016 federal election was processed prior to analysis: each row contained the party that received the primary vote, the party on which its preferences finally exhausted, and the mean of each electorate's total votes transferred for that pairing. Parties that received fewer than 10,000 votes overall were removed from the 'from' data set, and the 'to' data set was limited to the four parties with the most candidates across the country.
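
A hedged sketch of that preparation step in base R; `prefs` and its column names are hypothetical stand-ins for the AEC download:

```r
# Hypothetical preparation of AEC preference-flow data for correspondence
# analysis; the column names are illustrative, not the AEC's.
prefs <- subset(prefs, from_party_total >= 10000)  # drop very small parties
flows <- aggregate(votes ~ from_party + to_party,  # mean votes transferred
                   data = prefs, FUN = mean)       # per pairing, across seats
tab <- as.data.frame.matrix(xtabs(votes ~ from_party + to_party, data = flows))
```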

Column Contributions (%)

To Party  Dim 1   Dim 2   Dim 3
To_ALP    14.00   41.63    8.36
To_GRN    73.75    2.35    3.13
To_LP     12.18   13.41   48.28
To_NP      0.08   42.60   40.22

A three-dimension model was chosen because it explained at least 80% of the variance in the data. Each dimension captures part of the relationships within the table, based on the data in the columns (and rows), and accounts for a certain proportion of the variance. For example, the Greens (To_GRN) contribute most to the variation explained by the first dimension, Labor (To_ALP) and the National Party (To_NP) contribute the most to the second, and the National Party (To_NP) and the Liberal Party (To_LP) contribute the most to the third.

In the plots below, the rows for preferences flowing from One Nation (From_ON), Katter's Australian Party (From_KAP), Independents (From_IND), the Nick Xenophon Team (From_XEN) and the Northern Territory's Country Liberal Party (From_CLP) were suppressed, that is, treated as supplementary. Their positions relative to the other parties are displayed, but they are not used in calculating the placement of the points.

These plots illustrate the relationships between many of the better supported parties and where the preferences of those who voted for them ended up. Based on how minor-party voters allocate their preferences, you can also see how the parties relate to each other. For example, Rise Up Australia (From_RUA) and Family First (From_FFP) voters order the major parties in very similar ways, whether through their own choices or their willingness to follow a how-to-vote card, which reflects where each minor party sees itself relative to the others.

Plot of dimension 1 and 2

On dimension one, the distance between One Nation and both Labor and the Liberal Party is minimal. On the second dimension One Nation is closer to the Liberal Party, while on the third it is closer to Labor. Compared with the Greens, which stay close to Labor on all three dimensions, the position of One Nation suggests that those who voted for it at the last federal election did not have a clear preference between the two major parties. The Nick Xenophon Team occupies a similar position relative to Labor and the Liberal Party, though it skews slightly closer to the other parties.

Plot of dimension 1 and 3

Based on the preference data available from the AEC, the assumption that One Nation voters are being drawn predominantly from those who would otherwise vote for the more conservative of the two major parties is not supported. At least based on the data used here, it would be fair to assume that both of the major parties are probably losing first preference votes to both One Nation and the Nick Xenophon Team.

With parties like One Nation and the Nick Xenophon Team likely to field more candidates over time, and with the electorate clearly willing to cast votes for an alternative when one is available, the drop in the major parties' primary vote is highly likely to continue.

Correspondence Analysis in R

Unlike software like SPSS, R does not always provide all of the output or tests for a particular technique in one place. What in SPSS is a checkbox in the appropriate menu generally means, in R, loading another package or writing a function. Fortunately there are a lot of packages available, and the R community is both large and generous, which makes replicating the output easier.

SPSS Checkboxes

Currently I am studying statistics with Swinburne University of Technology, and most of the work has been done in SPSS. Unfortunately I don't have access to SPSS outside of the academic version. I do, however, have access to R.

R can do practically anything that SPSS can, but it will generally need a number of packages to do so. For a process like multiple regression, covering the assumption tests, running the analysis and visualising the results can take several packages, not to mention those needed to prepare the data in the first place.

Regression is a very useful tool for most analysts, myself included, and it is well worth producing a workflow template based on the material covered while studying. My workflow was written for multiple linear regression with a continuous dependent variable: a collection of functions and packages that produce a regression model along with the graphs and statistics for assessing its validity, including tests for outliers, influential points, normality and linearity.
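
A condensed sketch of the general shape of such a workflow, using base R and the car package; the data frame `df` and its variables are placeholders, not the full template:

```r
library(car)  # vif(), outlierTest() and other regression diagnostics

# Placeholder model: a continuous outcome regressed on three predictors.
model <- lm(y ~ x1 + x2 + x3, data = df)

summary(model)                     # coefficients and overall fit
vif(model)                         # variance inflation factors (multicollinearity)
outlierTest(model)                 # Bonferroni-adjusted test for outliers
which(cooks.distance(model) > 4 / nrow(df))  # a common influence cut-off
par(mfrow = c(2, 2))
plot(model)                        # residuals, Q-Q, scale-location, leverage
```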

For Correspondence Analysis and Multiple Correspondence Analysis, this was not the case: the FactoMineR package covered almost all of the functionality needed to replicate the required workflow and output.

The code for each of the workflows is included below and the R and Rmd files as well as the markdown output can be found in the following repository: Correspondence Analysis Process.

Correspondence Analysis code
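
In outline, a minimal FactoMineR sketch rather than the full script from the repository; `tab` is the from-party by to-party cross table built earlier, and the `row.sup` indices are placeholders for the suppressed parties:

```r
library(FactoMineR)

# `tab` is the from-party x to-party cross table; the row.sup indices are
# placeholders for the suppressed (supplementary) parties.
res.ca <- CA(tab, ncp = 3, row.sup = c(3, 7, 9), graph = FALSE)

res.ca$eig          # variance explained per dimension
res.ca$col$contrib  # column contributions, as tabulated above
plot(res.ca, axes = c(1, 2))  # dimensions 1 and 2
plot(res.ca, axes = c(1, 3))  # dimensions 1 and 3
```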

Multiple Correspondence Analysis code
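
And a similarly minimal MCA sketch; MCA works on individuals by categorical variables rather than a cross table, so the data frame `survey` of factor columns below is purely illustrative:

```r
library(FactoMineR)

# `survey` is a hypothetical data frame of factor (categorical) variables.
res.mca <- MCA(survey, ncp = 3, graph = FALSE)

res.mca$eig                       # variance explained per dimension
plot(res.mca, invisible = "ind")  # show category levels, hide individuals
```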

Visualising the Question: Is it really "Two Party Preferred"?
When I was watching the ABC coverage of the last Australian federal election, the Liberal panellist took great delight in pointing out how little of the primary vote the Labor Party had received at that point. Frankly, he was right. But interestingly, the Liberal Party was not looking likely to get over 50% of the primary vote either. In light of what we have been seeing in Europe and the USA, with the apparent growth of anti-establishment feeling in the electorate, I was curious whether we had seen the same thing here over the last few elections.

So shortly after the last election I made some graphs using Shiny (the code can be found here) and published a short post on LinkedIn, planning to return to it later and add the data from the 2016 election.

Two party versus others

And there it is: the trend seen over the last few elections has certainly continued into 2016. One interesting thing did turn up in 2016: an increase in the number of newer, smaller parties, many of which were not present in 2013. In the following graph these are aggregated as 'Other' and include parties like the Nick Xenophon Team and Katter's Australian Party.

Parties over time.

What becomes clear when you look at the smaller parties is that while the Greens have been fairly consistent over the last three elections, it is the growth of the 'Others' group that really stands out.

Minor Party Growth

One interesting question this raises is whether the trend will affect the conservative side of politics in the same way the Greens have affected the ALP over time. The last time in recent history that we saw something similar was with the emergence of the One Nation Party, which immediately preceded Australia's swerve to the right on asylum seekers, best exemplified by the Tampa Affair. It will be interesting to see what impact the 2016 results have on Australian politics over time, and how the major parties attempt to reconnect with their constituents and address the loss of faith and disenfranchisement that this drift away from both mainstream parties represents.

Prices in Search and Shiny

As a personal project early last year I put a simple scraper together. I used it to monitor a set of search results pages and document the ads that appeared. For a number of these pages, the advertisers included prices. With about a year's worth of this data on hand, now is as good a time as any to have a closer look at it, and an excuse to put something together in Shiny.

The Data

The data used here comes from the ads in Google's search result pages. It was collected by a scraper built with PHP and MySQL, which monitored a list of search terms, archived the HTML and stored the ads in the database. Archiving the pages as HTML turned out to be a good idea, as Google changed their markup a few times over the course of the year and some data had to be extracted after the fact.

Some processing was required to create the data set used here. The display URLs in the ads needed some cleaning before they could serve as a proxy for brand, and the price information had to be extracted from the headline and the ad copy. In this exercise, a price is generally defined as anything matching the pattern "$[0-9,]+".
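
A minimal sketch of that extraction step in base R, with `ads` and its columns as hypothetical stand-ins for the scraped table:

```r
# Extract the first "$[0-9,]+" match per headline; `ads` is hypothetical.
hits <- regexpr("\\$[0-9,]+", ads$headline)
ads$price_text <- rep(NA_character_, nrow(ads))
ads$price_text[hits > 0] <- regmatches(ads$headline, hits)
ads$price <- as.numeric(gsub("[$,]", "", ads$price_text))  # "$1,299" -> 1299
```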

The Graphs

Price distribution over time

R is a very flexible tool. As well as being a fairly robust piece of statistical analysis software, it can be used to generate reports with packages like knitr, and interactive visualisations with Shiny. That is why I put together a visualisation of prices over time for a small set of search queries. The code used here can be found in the GitHub repository priceTimeseries.

The Shiny application opens on a scatterplot of price against time by observation. The second tab displays the distribution of the prices shown per brand, the third is a boxplot of prices by brand, and the fourth is simply a table of summary statistics for each brand. Every tab is constrained by the criteria selected in the left panel. The app is designed to communicate how competitors' paid search pricing strategies change over time.
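
A stripped-down sketch of the app's structure; the `prices` data frame (columns date, price and brand) is a hypothetical stand-in, and the real app linked above has more tabs and controls:

```r
library(shiny)

# `prices` is a hypothetical data frame with columns date, price and brand.
ui <- fluidPage(sidebarLayout(
  sidebarPanel(selectInput("brand", "Brand", choices = unique(prices$brand))),
  mainPanel(tabsetPanel(
    tabPanel("Prices over time", plotOutput("scatter")),
    tabPanel("Summary", verbatimTextOutput("stats"))
  ))
))

server <- function(input, output) {
  selected <- reactive(prices[prices$brand == input$brand, ])
  output$scatter <- renderPlot(
    plot(selected()$date, selected()$price, xlab = "Date", ylab = "Price ($)")
  )
  output$stats <- renderPrint(summary(selected()$price))
}

shinyApp(ui, server)
```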

So Why?

Shiny more or less turns R into a dashboard. It makes it possible to take the kinds of analysis that can be done in R and integrate them into something you can put in front of most people in a business. The example I put together just displays the data like a glorified spreadsheet, but that does not mean more interesting analysis can't be presented this way.

Text processing, n-grams and Paid Search
One of the coolest things about paid search advertising is matching ads to what someone is searching for. Very few channels let you target intent with such precision. There are some caveats: depending on the match type, the keywords being bid on won't always be the same as the search terms that triggered the ad. AdWords provides tools like negative keywords to help manage this and ensure the right ads appear on the right search results; the trick is to identify exactly what needs to be removed and what is working.

Example distribution of relative clicks to individual search terms

Most reasonably large AdWords accounts with a good range of match types will see clicks, spend and impressions by search term follow an exponential distribution very similar to the example above. A small subset of search terms accounts for a large amount of activity individually, followed by a long tail of terms that individually do not, but which, given a large enough account and a solid long-tail strategy, can provide a good return. Or not: it is all about how well the data can be leveraged. The challenge with larger accounts is how to analyse this information effectively and at scale.

N-Grams and Aggregating the Tail

Text analysis is an interesting field, and one with a lot of relevance to search; things like Google's N-Gram viewer fall into this area. Regardless of the keyword they were matched to, search terms can share common phrase structures across the entire account, including in the long tail. While many of the search terms in an account won't individually have enough volume for analysis, the phrase parts they contain can when aggregated.

There are a number of tools available for breaking a corpus down into its n-grams, including a few R packages like the descriptively named 'ngram'. I used it in my phrasePartAnalysis code on GitHub, which takes an AdWords search term report, extracts both 2- and 3-word grams, and prepares the data for analysis by linking performance metrics to n-grams. The script can be slow with larger data sets, as text processing is fairly computationally intensive. What is done with the data next is probably the most interesting part, and I have included a number of graphs and outputs to help explore the information.
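
The core step looks roughly like the sketch below, assuming a hypothetical `report` data frame with search_term and clicks columns in place of a real AdWords export:

```r
library(ngram)

# `report` stands in for an AdWords search term report (search_term, clicks).
report <- report[vapply(strsplit(report$search_term, " "), length, 1L) >= 2, ]
grams  <- lapply(report$search_term,
                 function(s) get.ngrams(ngram(s, n = 2)))  # 2-grams per term
long   <- data.frame(ngram  = unlist(grams),
                     clicks = rep(report$clicks, lengths(grams)))
aggregate(clicks ~ ngram, data = long, FUN = sum)  # clicks per phrase part
```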

Where the Value is

At some point in the data analysis process, actually looking at the data can be very useful, and most stats packages, R included, make this easy. Base R has a lot of graphing tools, and the ggplot2 package is almost essential for most projects. In most cases one of the best ways to start is to look at the shape, or distribution, of the values of interest.

Most of the following plots were created with a different data set to the one provided in the GitHub repository. The output from the example data is available through GitHub Pages here: http://anthonypc.github.io/phrasePartAnalysis/. The data I used to produce the plot below follows a similar exponential curve in volume to the raw search terms shown earlier. In this case it is more an indication of how homogeneous the account's traffic is; it is pretty clear that a very small number of phrase parts cover a lot of the activity.

Ngrams and Clicks

For this kind of search term analysis, one of the most useful things to look for is how certain phrase parts perform across the account. Some combinations of words will be triggered by multiple keywords, and sometimes across different campaigns. A phrase that works well in one area of the account might not perform in another. Products such as domestic and international travel are a good example: the phrase "to brisbane" is fine for a domestic product but useless in an international campaign, though it can appear in both due to keyword or match type strategies.

There are a number of things you can do with the data set produced in this initial process, and looking for phrase parts that appear across different parts of the account with different performance characteristics is among the most valuable; in practice this is where I find most of the value. Most of the work needed to identify these cases can be done in Excel using a file exported from the script, or by reviewing the tables generated as per the example R markdown output.

Graph Time

In addition to the raw numbers produced in the tables and exported CSV files, graphs are very useful for getting an understanding of the data. The R script linked above includes a number of simple graphs for exploring how key values are distributed by n-gram, campaign and label. One important thing to keep in mind when graphing this kind of information is that not all kinds of graphs are appropriate. For example, the distribution of activity across most dimensions is not normal, so visualisations like box plots, which are most useful when the data approaches a normal distribution, can be a little misleading.

Clicks (log) per group

The same applies when plotting a heavily skewed sample, as in the boxplot above. A histogram or a kernel density graph would probably have been more useful.
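
For instance, a grouped kernel density is a one-liner in ggplot2; `df`, with clicks and group columns, is a hypothetical data frame:

```r
library(ggplot2)

# Kernel density of log-transformed clicks for each (hypothetical) group.
ggplot(df, aes(x = log1p(clicks), colour = group)) + geom_density()
```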

logClicks to logCost

Bivariate graphs like scatter plots are very useful for paid search data, where some of the most interesting points are bivariate or multivariate outliers. In the data used for the graph above there are a number of points with extreme values for both clicks and cost.

Extreme Ngrams by cost

In the example code, extreme values are labelled using an ifelse rule with a z-score greater than 2 against cost. A more sophisticated approach would be to use a statistic like Cook's Distance.
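
A sketch of that labelling rule, with hypothetical column names:

```r
library(ggplot2)

# Label only points whose cost sits more than two standard deviations out.
df$z_cost <- as.numeric(scale(df$cost))
df$label  <- ifelse(df$z_cost > 2, df$ngram, NA)

ggplot(df, aes(log1p(clicks), log1p(cost))) +
  geom_point() +
  geom_text(aes(label = label), na.rm = TRUE, vjust = -0.6)
```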

Mahalanobis Distance and Cook's Distance are among the techniques used for testing assumptions in multivariate regression. While the data used here is certainly not appropriate for regression, those two statistics can still be used to identify points that do not exhibit the same relationship between clicks and conversions as the rest.

This is really just the start of what can be done with search term analysis. An n-gram model like the one presented above has a number of weaknesses: as coded, it will not catch minor misspellings or typos and group them together, and it includes no multivariate analysis or model for detecting influential points within a campaign or label group, though both are possible and easily supported by R.

How Not to be Wrong*
*(or, at least, discover that you are sooner)

Effect size is a thing, so is ROI

Everyone agrees that testing is important. Unfortunately, just because someone in marketing can string the phrase "A/B testing" together on a slide does not mean it will be done well. All too frequently it is poorly run, poorly documented and poorly managed, by people who do not have the skills required to design an experiment that addresses their needs, or even to understand what the data generated actually represents.

A while ago I quickly drafted a fairly informal document to send around the office to lay out a very basic framework for running tests within the web side of the business.

Testing needs purpose

Articulate the need for testing and define what is to be trialed to address this. Identify who the stakeholders are and inform them of the test where appropriate. Consider whose activity will be affected by this.

Testing needs focus

Identify the metrics that the proposed change should affect. Identify other factors that could possibly influence these metrics. Assess the quality of the data available and identify any issues with tracking and reporting that may compromise the validity of the test.

Testing should not needlessly duplicate previous work

Establish whether similar or overlapping tests have already been run on any of the brands. If there is existing material, review previous testing across all brands that may be relevant to the proposed trial. Identify where that information addresses similar areas of interest, and how the new test will differ and reveal new information.

Testing needs to be linked to business outcomes

Outline the value of the information generated by the trial in terms of business outcomes. Address information generated from all possible outcomes. Define how the trial will relate to the business’ general strategy.

Testing works when what Success or Failure look like is known

Identify what the change being tested is supposed to improve. Define how much this metric would need to change for it to be worth the cost of implementing and to create value for the business. Project how much data would be required to prove a change of this magnitude, and identify which sources of data will be relevant.
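
Base R can make this projection directly. As an illustrative example (the rates here are made up), the sample needed to detect a lift in conversion rate from 5% to 6% at a 5% significance level with 80% power:

```r
# Required sample size per group for a 5% -> 6% conversion lift (illustrative).
power.prop.test(p1 = 0.05, p2 = 0.06, sig.level = 0.05, power = 0.80)
```

That works out to roughly 8,000 observations per group; halve the expected effect and the required sample roughly quadruples, which is exactly why the magnitude worth detecting has to be defined up front.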

Testing requires design

Determine the scope of the test by defining how many resources to devote to the trial and control groups. Establish how long will be required to produce a result based on the data available. Identify confounding factors and plan how to control or adjust for them. Consider how to limit the impact of the trial on other areas of the business.

Testing needs to be implemented

Begin running the trial, collect information as it is generated and assess it periodically. Establish a framework for assessing in-progress results, taking into account what would count as an egregious trend requiring early termination of the trial. Conversely, monitor for especially conclusive results that would allow the test to finish early.

Testing must finish

On conclusion of the test, document and share the data generated during the trial. If the data supports implementing the change, do so. If it does not show an improvement, or not one large enough to be worth implementing, do not; instead, revise the change in light of the new information and plan another trial.

Testing must be reassessed

Plan to reassess the tested elements at a later date using real data gathered after implementation. Assess where this information differs from that acquired during the test, and determine whether any differences indicate that the testing methodology needs to be adjusted.

Because being wrong gets awkward

And the point is?

It is insanely easy to run tests on a website these days. There is a range of tools available to support execution and management, and testing is practically expected in most businesses. However, just because someone has access to Visual Website Optimizer or can find 'Experiments' in Google Analytics does not mean they know what they are doing. Being able to run a testing tool is not enough. Understanding what the project is supposed to accomplish, doing due diligence on experiment design, setting aims, determining how much data is required, and executing all of this to serve an actual business outcome requires something beyond knowing the names of the tools.

Content Groups and Network Graphs
Google Analytics' content groups are certainly one of the tools I find interesting. Setting them up can be involved, though the tracking-code approach combined with GTM greatly simplifies matters. A lot could be written about planning content groups, executing on them, and then using the data available via the Google Analytics API to produce meaningful analysis; the subject is certainly worth a few blog posts. For now, though, let's look at visualising the data.

Visualising networks can be very interesting and in some ways challenging. There are plenty of good examples of different approaches to the problem, including material from the d3.js gallery, any number of R packages (like qgraph), and other tools. Suffice it to say, there has always been interest in connecting nodes via edges in interesting and informative ways.

Selecting the best way to explain the relationships between different things based on some form of value metric can be problematic, especially with detailed data.

Site Sections to Site Sections

The following examples are based on content groups for another site. The dimensions used are previousContentGroup and nextContentGroup1, with a log-transformed pageview total as the value metric. Unfortunately the site this is based on has not been updated frequently and as a result has very little traffic. One of the most important things to keep in mind with this data is that it is bi-directional: the graph needs to differentiate between the two directions of a connection, which rules out a few possibilities, such as the following, created using a now-defunct web service called impure.com.

A network of tags and posts

One method of dealing with this complexity is to use interactive visualisations, the JavaScript library d3.js being one of the better-known examples of this approach.

Force Node Layout

Quick node graph created in R

Creating the above graph is very easy in R. The code used to create it (minus API access details) can be found on GitHub here. The edges are weighted by the volume of traffic moving from one area to another; in this example the exact numbers have been log-transformed to reduce the absolute differences between the edge widths.
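
The gist of it, sketched here with the igraph package rather than the original script; the `edges` data frame (columns from, to and pageviews) is a hypothetical stand-in for the content-group flow data:

```r
library(igraph)

# `edges` stands in for the content-group flow data (from, to, pageviews).
g <- graph_from_data_frame(edges, directed = TRUE)
E(g)$width <- log1p(E(g)$pageviews)  # log-transform to compress edge widths

plot(g, layout = layout_with_fr(g), edge.curved = 0.2,
     edge.arrow.size = 0.4, vertex.label.cex = 0.8)
```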

A d3.js force node version, based on material and examples from D3.js Tips and Tricks, was created from the same data. One of the main differences between the two is how each handles edges that link a node back to itself: the R example handles these by default, while the d3.js example would need further work to account for them.

Chord Graph

The above graph is based on the Uber Rides by Neighborhood and Chord Diagram examples. It was created from the same data set, though in this case the data was converted to a matrix rather than used as a simple CSV listing source, target and value by row. The same transformation was applied to pageviews as in the force node graph.

Unlike the others, this graph makes it far easier to drill down to a single group, and in the more detailed examples linked above this makes it easy to focus on the areas of interest within a crowded visualisation.

The point

While data visualisations can be interesting, what has not yet been addressed is the value they provide: what decisions they make possible and what insights they offer. The value you can get from a visualisation depends on everything that happened before it. In the case of the content group examples above, that starts with why the pages were assigned to their groups, why those groups exist and what they are supposed to represent, through to how the data is collected, collated and finally processed.

Hootsuite is a popular and flexible message management and reporting solution for social media, and it is easy to get started with. I wrote a guest post for Search Engine People covering the basics of Hootsuite's analytics tools: how to get the most from social activity and how to report on what is achieved.

The most important thing, before designing a reporting strategy, is to understand who will consume the reports, what they need to see and what best explains the value of the channel. Implementing best practices, like taking advantage of Ow.ly links for posts made through Hootsuite, is important for getting the most out of Hootsuite's analytics.

Managing paid search campaigns can be very involved, depending on the scope of the product and the budget available. As a result it is important to perform regular reviews, all the more so as accounts increase in size and complexity. The larger the accounts get, the more important it is to apply a consistent methodology to this audit activity. Recently I wrote a guest post titled Quick and Dirty SEM Account Audit Basics for Search Engine People covering a few of the basics that any good audit should include.

As a general rule, any reasonably large account should be audited at least once a quarter. Ideally, auditing should be an ongoing process undertaken alongside promotional activity and account expansion. Depending on keyword coverage and the level of activity this can be a daunting task, and it can quickly turn into an overwhelming one without a structured, strategic approach.