What inputs lead to what goals and outputs in a community like inbound.org?

In a previous post, I explained our problem with a lack of engagement, and how we scaled personal email to solicit answers from our members to questions in the inbound.org community.

Whilst we saw quick gains there, and it proved that Q&A could be a content format that would let us keep our members engaged and grow, we also hit a ceiling on growth and engagement. It was a tactic, not a strategy.

We needed a cohesive strategy to help us grow. And that strategy ties closely to content in our community - particularly the numbers around that content.

This post is about how we quantified our content strategy for the community here at inbound.org, and built a predictive growth model to guide that.

The Power of Cumulative Goal Lines

One of the tools used at HubSpot is cumulative goal charts with current performance plotted against a linear goal line. At a glance, you have a measure of the health of a team - are they above or below their goal line?

Pair this with a goal that increases month-over-month. This masks massive growth goals: 80% year-on-year growth is possible with just an incremental 5% improvement each month. That's just doing the same as you did last month, plus a little bit more. It feels achievable! That said, "same old, same old" quickly can't keep up. You're forced to innovate to make that next stretch and buy breathing room for the next period.
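The compounding arithmetic is easy to check - a quick sketch, assuming only the 5% monthly figure above:

```python
# A 5% month-over-month improvement compounds over 12 months.
monthly_growth = 1.05
yearly_growth = monthly_growth ** 12 - 1

print(f"{yearly_growth:.0%}")  # ~80% year-on-year
```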

With a goal line in place, we were forced to innovate in order to keep up. For the community team, we chose the number of unique weekly contributors. More contributors means more content and more weekly active users (WAU is our top-level metric at inbound.org at the moment). Here's what the graph looks like in context.

How can we grow this community to 1,500 weekly contributors?

Realising that “send lots of email” isn’t a strategy was the first step. All personal emails do (when done right) is bring the right people to the right place at the right time. In searching for a proper community content strategy, we broke this bigger question down into four separate questions:

Numbers - How many discussions, articles and posts do we need?

Types - What types of content drive outsized engagement? (i.e. can we short-circuit it by focusing on more engaging content?)

Who - Which members should we send what content to?

What - What topics are trending and worth hosting a discussion around right now?

These each turned into big sets of analysis, so to keep this post a reasonable length I've divided them into separate posts - the first of which tackles numbers: how many discussions, articles and posts do we need to get to 1,500 unique weekly contributors?

Finding the number of discussions

Community content drives engagement in two ways - members can both contribute content and react to other members' content. Sangeet Choudary has a model for this which illustrates "co-creation" within online platforms. For us, increasing the number of contributions (discussions) means we can enrich the environment of content to read and react to.

(Marketplace version of the Unit of Interaction model, adapted from Choudary - platformed.info)

This seems an obvious insight - but that’s not the objective of the model. Can we take this insight and quantify it?

By quantifying this goal, we'll have a basis for comparison and prediction, and be able to build a growth model and design a realistic strategy with our limited resources to reach our goal of 1,500 unique weekly contributors.

The first thing is to look for a relationship - a correlation - between the number of discussions and the number of contributors. Does increasing the number of discussions increase the number of contributors?

By plotting a scattergraph, we can see where the data points fall and look for a correlation. And there was one! Oh mathematics, you can be so beautiful sometimes :)

You can interpret this as 242 people contributing to the community each week without any questions on the site - only articles and posts (anything besides discussions). Then, for every discussion that gets posted, we should see another 2.7 contributors each week. (Note: this model only holds strictly within the data range of 69 to 151 discussions where we have data points.)

Within the given range, we can say that 77% of the variability is explained by this equation (which is high for a model like this). (Read more about interpreting R² values.)
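For reference, here's a minimal sketch of the fitting process. The weekly figures below are invented for illustration (our real dataset isn't reproduced in this post) - only the fitted shape, an intercept around 242 and slope around 2.7, mirrors the model above:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x, plus R^2."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical weekly (discussions, contributors) pairs.
discussions = [69, 85, 98, 110, 124, 137, 151]
contributors = [433, 464, 510, 537, 584, 608, 651]

a, b, r2 = fit_line(discussions, contributors)
print(f"contributors ~ {a:.0f} + {b:.1f} x discussions (R² = {r2:.2f})")
```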

We can also look at the average number of unique weekly contributors per discussion. This is a much simpler model which doesn't factor in contributors who haven't interacted with discussions. It averages 4.7 more unique contributors for every discussion, with a standard deviation of 0.74.

Both of these are relatively strong models within the data range. There aren’t many outliers within either model - it should be useful!

What about articles? Could we still grow with them?

Articles have always been a staple part of the community and what gets shared on inbound.org, but could article sharing be a source of growth for us?

By running similar analysis on articles, we can see the strong correlation between the number of articles shared and the number of unique contributors (excluding shares of articles).

You can interpret this as 70 people coming to the site every week regardless of how many articles are shared, and 7 more unique contributors for every 10 articles shared. The R-squared is very high too at 0.836 (84% of the variation is explained by this equation). This held for between 410 and 641 articles per week.

Under the simpler averages model (which ignores any other reason why members might come to the site), we see 8 unique contributors for every 10 articles shared in the community.

But, these are alarming numbers. Having fewer contributors than posts means most submitters don’t comment or interact with their own post. It also makes the number of articles we’d need to reach our goals become incredibly large.

The OLS regression model says we need 5566 articles submitted every week… which is insane! That’s almost 10x the number of articles submitted already!

Also, it's questionable whether there are even 6,000 articles about marketing that could be submitted to inbound.org each week, let alone good ones. And that's still ignoring the problem of 6,000 article submissions all competing for our members' attention (an email digest of 6,000 links??)

Article sharing (i.e. sharing knowledge that exists outside the inbound.org community) will always be a core part of the inbound.org experience, but the key lesson here for our strategy is not to rely on articles for driving high levels of engagement and growth - at least until we can figure out how to make members engage with articles as much as they do with discussions.

How many weekly active users from our unique contributors?

With strong models relating unique weekly contributors and discussions, we can build a strategy for the community to hit our goal of 1,500 unique weekly contributors. However, our community team doesn’t exist in isolation. Our overall inbound.org team goal is 15,000 weekly active users by the end of the year.

We count our weekly active users based on any page view by a logged in member in Mixpanel. By pulling this weekly data and comparing it with our weekly unique contributors, we can see that there’s a consistent trend associating contributors and weekly active users.

This chart shows a base level of 2,250 members coming to the community each week anyway (regardless of the number of contributors - to browse, to use other parts of the website like the jobs board, and so on), and 4 more weekly active users for each new unique contributor. (Note: this only strictly holds within the sample range of around 400-700 weekly contributors.) Again, the correlation is strong with an R² of 0.704 - 70% of the variation is explained by that equation.

Using the simpler averages model (which doesn’t account for contributors who have participated in anything else on the site) we see 7.5 more weekly active users for every extra unique weekly contributor, with a standard deviation of 0.91.

Comparing the OLS and Mean Average models

With more discussions (and articles) resulting in more contributors, and more contributors resulting in more weekly active users, we can model how many discussions and contributors we need to reach our end-of-year target of 15,000 weekly active users.

But does increasing the number of questions actually cause an increase in the number of contributors? Predictive models like this aren't perfect. Whilst the correlations are strong and show it's a good model, we'd want to see if it holds a similarly strong correlation outside the data range (i.e. with even more questions and contributors than we've had before).

Is the model likely to change with more discussions?

The second commenter is what makes a thread a discussion - "it takes two to tango". By looking at the time it takes for a thread to reach 2+ commenters, we noticed a trend: more questions develop into a "discussion" (2+ commenters), and they do so faster.

This held for both the number of questions and the number of contributors. More activity seems to correlate with faster activity.

Average number of hours to 2+ comments = 66 - (0.057 × number of contributors), R² = 0.39

[Numbers to be filled in] Average number of hours to 2+ comments = XXX - (XXX × number of discussions), R² = XXX

For every extra contributor, it appears to shave an average of 3 minutes 25 seconds (0.057 hours) off the time it takes to get an answer (within the data range of 400-800 unique contributors). There's still a positive correlation (although not as strong) showing that more activity creates faster activity.
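Converting the regression coefficient into a per-contributor time saving is simple arithmetic:

```python
# 0.057 hours saved per extra contributor, expressed in minutes and seconds.
hours_saved = 0.057
seconds = hours_saved * 3600

print(f"{int(seconds // 60)} min {round(seconds % 60)} s")  # 3 min 25 s
```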

Given this, we can probably expect the discussions and contributors required to be slightly lower than our earlier models predict - we've some evidence that more activity speeds up activity overall.

But, it’s still very clear from this analysis that questions are the way forward, supplemented by articles, and that we still need to substantially increase the numbers of discussions which get posted in order to meet our overall team goal.

Can we test this? (And build our case with the product team?)

A mathematical model built from our own data might be helpful for drawing an insight, but it isn't useful on its own for predicting the future. Does the model hold outside the existing data range?

The test we needed to run was simply to surge the number of questions beyond our existing data points - with brute force or otherwise - and see if the resulting numbers of contributors and weekly active users matched what our models predicted. The week of the 9th of May, we did exactly that.

This was done all on the marketing team without product team involvement. Some of the techniques we tried:

Answering more questions posted (being less selective about which questions we ran our outreach process on).

Re-posting old questions which weren't answered (we reset the date of the submissions and emailed the original poster to let them know; we'd sometimes tweak the titles and posts for clarity too).

Posting our own questions (particularly those designed for high engagement, or targeting contributors in inactive segments).

Sharing more CTAs, posts and emails prompting people to ask questions

With each of these, we ran our Q&A outreach playbook, which involves clarifying and cleaning up the submissions, reaching out to relevant members, and upvoting the post and good comments. This wasn't something we'd be able to sustain as a marketing team without forgoing other activities we work on week-to-week, since the outreach process is piecemeal: twice the questions is twice the work.

The OLS model significantly underestimated the number of weekly active users, and the mean average model significantly overestimated the number of weekly contributors. But the overall result we saw was in line with both models: increasing the number of questions led to appropriately proportional increases in contributors and weekly active users.

Accordingly, our models predict the following:

OLS Model requires 1329 discussions and 3860 unique contributors

Mean Average Model requires 439 discussions and 2001 contributors.

Neither model factors in other variables (like contributions to articles), so the Mean Average Model is likely to be an underestimate, and the OLS Model an overestimate. By rounding these to the nearest, neatest thousand #RoundNumberSyndrome (approximately a 0.6 weighting between the two models), we should target approximately 1,000 weekly discussions and 3,000 weekly contributors.
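As a sanity check on the OLS figures, the two models can be chained and inverted: solve the WAU model for contributors, then the discussion model for discussions. The sketch below uses the rounded coefficients quoted in this post, so it lands near (but not exactly on) the figures above, which come from the unrounded models:

```python
def required_contributors(target_wau, base=2250, wau_per_contributor=4):
    """Invert WAU ~ base + wau_per_contributor * contributors."""
    return (target_wau - base) / wau_per_contributor

def required_discussions(contributors, base=242, per_discussion=2.7):
    """Invert contributors ~ base + per_discussion * discussions."""
    return (contributors - base) / per_discussion

contributors = required_contributors(15_000)
discussions = required_discussions(contributors)

print(round(contributors), round(discussions))  # rough, rounded-coefficient estimates
```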

What the surge week test also did was strain our team considerably - our community processes creaked when we scaled this up. Central to this was the need to use personal emails to get the right people to the right questions at the right time. As piecework, increasing the number of questions we run outreach on leads to a proportional increase in workload. It’s okay for a test, but not for building a sustainable strategy for growing the community.

With this, we can design ideas to approach our product team with. By presenting the problem, the tests, some ideas for solutions, and how this ties directly to our team's fundamental objectives (remember the exponential goal lines earlier?), we can see this initiative prioritised higher in our development queue.

How could we grow the number of questions asked?

This really falls on two sides - product side, and marketing side. Product side is far more scalable and relieves our marketing team for other projects, but has an extensive backlog (and with it a need to “make the case” for development time). Marketing side is limited mostly by hours available to contribute (we’re a small team).

Most of these ideas fell into five buckets for growing Q&A, and wider discussions in the community.

1. Growing general awareness of Q&A

To make all visitors and members aware that they can ask a question to the community, we can grow general awareness with CTAs across the site.

We have a submission button in our top nav bar, but we don't really have much space for clearer, more focused calls to action about asking questions. So we've added a sidebar call to action matching the rest of the site's clean design.

There are also calls to action to add on each of the discussion pages, and welcome guides to create for new and inactive members that prompt them to post questions.

None of these are particularly intrusive, but they raise awareness that this is something you can do.

2. Capturing intent

When our members show more explicit signals of intent, we can send more explicit messages and calls to action to post.

Searching the site shows similar behaviour to asking a question, so we created a big "Search or Ask a Question" box - you'll see this now at the top of the site.

We can also use events and page views around searching or asking a question to send members emails prompting them to post a question (similar to abandoned-cart emails on shopping sites).

Potentially, we could also look at offsite signals of intent and target members (and non-members) with ads and landing pages to drive new questions.

3. Producing more ‘accessible’ content

Some types of threads do a much better job of soliciting questions than others (how we do that is a topic for a whole other post) - in particular, AMAs and clinics:

Clinics are topical threads where members can ask questions around a related subject and have them answered within the thread - this concentrates activity. We've tried various formats of this, but in the end settled on a thread style similar to our previous "Ask Us Anything" threads. There are two key differences here:

It manages social density. By aggregating activity in one place (and time), members helping and being helped are concentrated in one thread. This lowers the barrier to posting, increases the belief that questions will be answered, and increases engagement.

It brings questions into existing conversations - instead of starting new ones. It’s more difficult to do something new, than join in with something that exists already - especially if what already exists is very popular.

4. Turn Articles into more “Discussions”

Discussions get more clicks, views and engagement. Similarly, article previews with custom descriptions get significantly more engagement too. We actually used our own findings as data on our article submission form to encourage members to share custom previews. “Pro Tip: Write a summary! Links with custom summaries in 2015 had +34% upvotes, +57% comments and +55% clicks through.”

To encourage article sharing to function more like discussions, we need to encourage more freeform comments. Facebook, LinkedIn, Twitter and others have a free-form text field for all sharing where the links get parsed and presented separately. This encourages members to share some sort of commentary or original thought, which invites more conversation.

5. Self-governance

As communities grow, it becomes impossible for a community management team to grow fast enough to manage all these relationships. Instead, community managers' roles pivot to supporting members' own initiatives (members starting new groups, starting threads, hosting events), which scales the impact they can have.

At inbound.org, we’re reaching the scale where there’s too much activity for a small team to manage well. We’ve also seen plenty of grassroots initiatives where members are creating their own communities, habits and patterns on inbound.org. These are opportunities for inbound.org to grow engagement significantly.

The community team’s role needs to pivot to nurturing those activities which produce more activity themselves. This will let us grow the number of contributors (and therefore weekly active users) without having to scale our own team significantly. All whilst building a stronger and tightly-knit community.

I’d love to hear your thoughts and ideas for encouraging people to post questions on inbound.org in the comments!

Our product team backlog is extensive. The insight alone that the number of questions drives growth is not enough to become a priority.

What about contributors?

The other side of the numbers in the community is the number of members who can visit and contribute to the community. We class a contributor as any member who has posted 2 or more comments, or has ever shared a new submission in the community.

Members in the site follow two paths:

Path to becoming a full member and contributor

Retention cycle (new, active, fading away, lost)

Both of these need to be considered and understood in order to grow the number of contributors and weekly active users. Whilst we've been lucky with member acquisition (we've over 165,000 members at the time of writing), we've struggled to retain members.

Only 30% of our members complete their profiles. Of our contributors, however, most (77%) have a completed profile. There's likely some relationship here between profile completion and becoming a contributor. Whether it gives us more information to include them in outreach emails, or they've previously contributed and want a profile which showcases them better - this needs more analysis (I suspect it's a bit of both).

Even if the relationship works the other way, populating profiles gives us more targeting info and more chances to include them in our tightly-segmented outreach emails.

We grow our pool of contributors by 400-500 members per month. This puts us on track for around 19,000 by the end of the year. To reach our target of 3,000 weekly contributors, we need these members to contribute (on average) once every six weeks, although that distribution is unlikely to be equal.

Currently, just under 8% of our contributors actively contribute each week. We need to double this to reach our target.
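The arithmetic behind that doubling claim, as a quick sketch:

```python
# Projected contributor pool vs weekly contributor target.
pool_end_of_year = 19_000
target_weekly = 3_000

needed_share = target_weekly / pool_end_of_year
print(f"{needed_share:.1%}")  # ~15.8% of the pool active each week, vs ~8% today
```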

We can also grow our pool of contributors through member acquisition and profile completion - members with completed profiles are 145% more likely to become contributors. There's a tradeoff, though, in asking for this information early on. New members arriving with an existing intention - "I want to comment on this", "I want to apply for a job" - don't want to spend time filling out a profile. That doesn't feel like it adds value. Instead, we try to progressively profile our members over time, whilst encouraging them to build habits of consuming and contributing content in the community.

The Growth Model

From discussions, members, contributors and more we can produce a simple model relating them all. This uses the OLS models for contributors and weekly active users.

By building this model, we've also gained an understanding of areas for optimisation. There are some clear gaps to address, notably around profile completion and retaining activity amongst our contributors to keep questions being posted.

Together, this can inform our product and marketing strategy and guide us towards our goal. We can also use this to understand where and why we do (or don’t) hit our goal, so we can continue to improve in future.

So what?

You grow a community by increasing the number of discussions. Not really rocket science? It’s a fairly obvious result.

The value, though, is in the process of getting there and in informing the decisions along the way. By quantifying this with models, we can work out what improvements we need to make to reach our targets. Remember the power of exponential goal lines.

inbound.org isn't like a traditional SaaS product or e-commerce store, so this may feel like overkill for you. But our community produces our content, which is our product.

There’s lessons here in understanding and modelling your own growth - using regression models, creating brute force tests (scale things that don’t scale), and creating a compelling case for your product team. Perhaps most importantly and widely applicable - the need to have your data all available and accessible. Your marketing is only as good as your data.

This post got rejected by our team at inbound.org for being a little long winded - still, I wanted to publish it somewhere for my own reference. My own personal site and "Notes to Self" blog seemed a fitting place for that, together with a pithier, to-the-point takeaways post on inbound.org itself.

So please forgive the length and benign detail of this post - as with the title of this blog, it's a 'note to self' :)

Recently, we’ve been sharing insights behind the scenes of what the community team does to grow the community here at inbound.org such as our outreach process (and how we scaled up to 100,000s of personal emails) and our community growth model (how many discussions and contributors we need to hit our end of year weekly active users goal).

But how can we drive outsized engagement with less input from our own team? We wanted to figure out how to stimulate lots of activity and discussion without having to scale our own inputs proportionally (i.e. hiring and managing a massive community team).

One way to do this is by figuring out what our most engaging content is - by views and comments - and how we can reproduce it. The first place to start is our most commented threads...

What are our top commented threads?

If you’ve been around the community for any length of time, you’ve probably seen our “Ask Me Anything” threads in the community with notable marketers and brands in our industry. These typically have a lot of traffic and engagement. They’re also good for branding and new member acquisition.

But, whilst they have a lot of engagement, the engagement per person who sees them is very low. "AMAs" are a one-to-many channel. The threads can only be answered by one person (or one team), which limits the potential number of back-and-forth comments that can occur.

Secondly, they’re difficult to multiply. AMAs take place over a few days (some days leading up to them for taking questions, and then a day with the experts answering questions). This makes it difficult to run many threads close together without them overlapping and performing worse.

AMAs are helpful and good for the community, but they're a cherry on top, not a lasting strategy on their own for growing outsized levels of engagement in the community.

What types of threads are likely to get outsized engagement?

To answer that, we first need to define what we mean…

How can we measure “engagement” with content?

An engaged member isn't just a reader. Contributors create content which creates value for others, so we need thread types that are more likely to attract contributions.

You could think of this as a conversion rate of views to comments. Even the best-ever threads give a number of no more than 5%.

Or you could take the inverse - how many views you need on a thread before you get a comment: Views:Comments and Views:Commenters. In English this means:

“For a given view by a relevant visitor, which threads are more likely to generate a commenter?”

i.e. Where could our content, promotion and outreach effort have the highest return?

Using the inverse 'ratio' form gives us bigger, more meaningful numbers (instead of small % variations of view-to-comment conversion rates) and also helps us calculate our outreach effort more transparently. From our personal email outreach process (we published how we scaled this to 100,000s of members, together with our numbers here), we know that we average about a 10% click rate (“open rate” x “click-through rate”) - 1 in 10 people who we email will click through.

By thinking of engagement in terms of “number of people we need to get a comment” it’s easier to run quick mental maths on the number of people we need to run outreach on.
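For example, here's the kind of mental maths the ratio makes easy. The thread ratio and comment target below are hypothetical; only the ~10% click rate comes from our outreach numbers above:

```python
views_per_comment = 25   # hypothetical thread ratio (25 views per comment)
target_comments = 10
click_rate = 0.10        # open rate x click-through rate from personal email

views_needed = views_per_comment * target_comments
emails_needed = views_needed / click_rate

print(int(emails_needed))  # emails to send to aim for ~10 comments
```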

By querying our database for all our discussions, comments, commenters and views, and then sorting them according to view:comments and view:commenters ratios, we can see a number of interesting insights.
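The ranking itself is simple once views and commenters are in hand. A sketch with invented thread data (titles and numbers are hypothetical, not our real dataset):

```python
# Rank threads by views-per-commenter; lower means more engaging per view.
threads = [
    {"title": "AMA with an agency team", "views": 5200, "commenters": 96},
    {"title": "Conference thread",       "views": 800,  "commenters": 64},
    {"title": "Intro thread",            "views": 1500, "commenters": 30},
]

for t in threads:
    t["ratio"] = t["views"] / t["commenters"]

for t in sorted(threads, key=lambda t: t["ratio"]):
    print(f'{t["title"]}: {t["ratio"]:.1f} views per commenter')
```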

The first of these was that there were very few AMAs - in fact, only one scored below 50:1 views:comments (a 2% comment conversion rate): "We Run Marketing Agencies. Ask Us Anything". This highlighted early on that this approach surfaces new, different sorts of threads compared with simply looking at top-commented threads.

Inbound conference in India? - This thread was posted by a community member who was looking to start a conference in India. This was almost collaborative in nature with lots of activity as members tried to organise elements of an event.

Clearly, the idea of creating a big, standout event "like INBOUND" resonated with our Indian members. Since the views:comments ratios were so strong, I emailed this thread to all our members in India. We just needed to fuel the fire with eyeballs - we even re-sent the thread, with an update, to people who had previously commented.

Introduction threads - We’ve experimented with threads for new members to introduce themselves. This quickly introduces the concept of commenting, gives us some material to point them in a direction that might interest them on site, and triggers a cycle of notifications.

However, these threads are hard to repeat and scale. Most of these weren’t very “social” - more a formality of “welcome to inbound.org” than a more authentic, welcoming, back-and-forth discussion like you might have in-person. Part of the problem here is those that comment on the thread have little in common with other members besides their cohort. To make this work in future, we’d need to build thread types by parts of members’ identities they share, like their location or skill.

Private Discussion - The most engaging thread however was in a private group. It discussed a major change for that group (which had financial consequences for all those involved), presented a video explaining the change, and then invited feedback. It took 2.5 views for every comment. This is hard to reproduce - but it illustrates one key principle. Humans are self-interested!

This wasn’t enough for a proper “strategy”

All these threads expressed some element of self-interest - the pride of a great conference in your own country, opportunities from promoting your new membership in the community, and the impact of a programme on your revenue. However, these were only three threads. It's hard to create a strategy from three threads alone, and none of them were obviously reproducible.

We still had a large dataset of discussions showing high levels of engagement (views:comments) - I wanted to classify and categorise our top 100 engaging threads, and see if there were traits in common with others that we could replicate and scale.

To do this, I wanted to boil it down into a few, simple types - this would be easier to remember, communicate internally and build processes around than having several, overlapping types.

Given the huge variation in engagement across different threads, classifying based on language, topics, keywords and so on didn't appear that useful. This ruled out using a lot of tools or database queries to do it. Instead, I manually classified the top 100 based on the original poster's desired output of the thread - what sort of answer were they looking for?

Whilst there was some overlap, this bucketed 90% of the posts into four (almost) discrete categories - “POLS”. These are thread types which are abundant (repeatable!), and highly engaging. They offer the most potential for growing the community.

A quick caveat if applying to your own content - these classifications may not apply to other communities. Inbound is B2B, and for marketers. Use these as inspiration, but recreate this process to classify and categorise your own discussions and content.

Here’s how the top 100 most engaging threads broke down.

Process - Sharing “how to”

Opinions - sharing subjective, reasoned opinion

Lists - answers could just be bullets

Storytelling - sharing own experiences

Process Sharing / “How to”

These posts solicit instructions and to do lists. They’re looking for highly actionable advice to a defined problem.

They take three forms:

How do I? (my problem)

How should they? (learning about someone else's process)

How do you? (as above, but gives space for self-disclosure and sometimes bragging)

Process threads are relatively easy to run outreach on. There’s often a clear match with a skill set - we can ask members whether they’ve any experience or suggestions for a named member. “Can you help with Casey’s PPC problem?”

These threads are easy to spot… see the Process examples on the inbound.org post about this article.

Opinions

If the output is supposed to be more subjective, then it has to be treated very differently. Opinion threads can take the form of "closed questions" (with reasoning and debate in the comments), polls, why something is the way it is, or a thread seeking a set of value judgements.

This works especially well if members of the community can be divided and debate each other. So long as it doesn't get personal or nasty, a good debate is good for a community - it shows people care.

As far as outreach goes, people don't respond well to being asked for their opinion on something - it often isn't that important to them. Conversely, they will respond if you point them to a controversial thread and say there's a "good debate" that they "as a {interest_group} might find interesting". It's also easy to keep opinion-based threads stoked by emailing members who have seen the thread with key updates from the latest comments.

Lists

If the primary output of a comment is a list (or the thread as a whole is a list), the conversation can take a very different form. Often this limits meaningful interaction between commenters beyond "nice idea!" or "haven't seen that before!". But they're still powerful for driving engagement, and for surfacing a lot of new insights and ideas very quickly.

Lists can take many forms, including sharing recommendations, plans, self-promotional posts (like this #Brexit sale thread) and so on. Arguably AMAs are similar to lists too.

Storytelling

Everyone loves a good story. It’s a form of content we’ve known works well in our community and have tried to pursue with our original posts. Stories are often a highly entertaining form of content.

It’s not a form of content that’s actionable (it’s hard to replicate someone else’s exact experience), and there’s little incentive for other people to share - outreach has to be based around people’s desire to share their past experiences. It also helps to do large-scale outreach after seeding the thread with a few quality answers to set a good example.

It tends to take the form of either something a person has done, or something closely tied to that person’s identity (for instance, something eventful about the city they live in).

Under “other” we also had a number of content types which drive high levels of engagement, but aren’t abundant or easy to reproduce. Amongst them were critiques and collaborations.

Critiques

Critiques, such as those in The Pit: Landing Page Critiques, invite lots of contributions. It’s always easier to be a critic than a creator - use this to your advantage with peer-to-peer content! The Pit works because there’s a tight community which gets notifications of new threads there. Outside of The Pit, though, we’ve found critiques hard to scale.

Perhaps this is something we can release in future with our channels and personalised homepage, plus some element of a “private” group - who really enjoys being critiqued in public? In your workplace, I bet there are certain “contributors” who have no shortage of comments, critiques and things to say about various projects. Private channels are likely the best way we can harness the engagement and content between critics and creators.

Collaboration

As with the Indian conference thread, if we can encourage collaborative work to happen on inbound.org (perhaps tied in with critiques?), we could drive a lot of new engagement and also business opportunities amongst members in our community.

One option here is to create private groups for companies to make it easier to share and discuss the latest news and ideas on inbound.org with just their own team. It’s one thing seeing something interesting, but it’s another thing to discuss that in private with trusted individuals. Perhaps in future, we can create private channels and integrate discussion threads on inbound.org with internal email, Slack, wikis and so on.

What threads don’t do well?

Whilst we’re running these numbers, it’s easy to look at what threads don’t do well - threads which get no comments, low views and few upvotes (though upvotes can be a poor signal of quality, given voting rings).

By reversing the order of the comments/views ratio, we could look into thread types that get very low engagement and views. These are discussion types we could advise against posting, offer advice on how to tweak for higher engagement, or actively ignore and avoid doing outreach on.
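As a sketch of that ranking step (with made-up thread data, not our actual numbers), sorting ascending by comments per view surfaces the least engaging threads first:

```python
# Rank threads by engagement (comments per view), ascending, so the
# least engaging thread types come first. The data is illustrative.
threads = [
    {"title": "AMA with a CMO", "comments": 120, "views": 4000},
    {"title": "Check out my new tool", "comments": 1, "views": 900},
    {"title": "Why did this campaign fail?", "comments": 45, "views": 1200},
]

for t in threads:
    t["ratio"] = t["comments"] / t["views"]

# Lowest comments/views first: candidates to deprioritise or advise on.
least_engaging = sorted(threads, key=lambda t: t["ratio"])
print([t["title"] for t in least_engaging])
```

Reversing the sort (descending) gives the opposite view - the thread types worth imitating and running outreach on.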

Five categories stand out:

Promotional - no value or purpose besides promoting something.

Obscure - hard to understand, let alone answer.

Limiting - only has value to the original poster. Few benefit from participating.

Statements - the original poster leaves nothing else to be said.

“Do my homework for me” - the original poster has taken little to no initiative to solve their own problem first. Laziness isn’t attractive to the community.

What about articles?

By running a similar analysis on our articles (sorting by comments:views ratio), we can see what sorts of articles drive discussion and engagement.

With articles in particular, there’s a lot of “great post”, “thanks for posting” and agreement with what was posted. Whilst it’s great to have positivity, and it can encourage members to share more quality articles, it doesn’t lead to great comments and debate.

The more interesting comments come where members take an article and add their own commentary on top. This is hard to prompt for (it’s weird to ask people “what do you think?” out of the blue) but it ends up emulating the discussion formats above: sharing processes, disagreeing with opinions, listing extra suggestions, and telling stories and related experiences. It’s especially easy to run outreach if the title of the article is itself a question.

How does this shape our community content strategy?

Whilst reviewing threads, we’ll look for threads which show these characteristics and plug them into our outreach process. A few case studies:

Nurturing Controversial Threads

Knowing that opinionated threads drive lots of engagement, we look for thread titles on site to promote to the top. But by monitoring social platforms, we can also use people and content from outside the community to drive more debates within inbound.org.

This thread took a ‘controversial’ quote that was dividing opinion on Twitter from Kristina Halvorson @halvorson (who wrote the book on Content Strategy) and asked people whether they agreed or disagreed. This was powerful for pulling in a large pool of people - including many influential ones. When Kristina herself commented (with good grace), I created a list of our contacts who had viewed the (already active) thread the day before and sent them an email with an update: a 69% open rate and 24% click rate (of those delivered, clicked through).

We need to balance nurturing threads for activity against the negative perception we build amongst those who are quoted and then taken to task by our members. That said, we actively look out for negativity, shaped into a process: look for negative and controversial phrases about marketing on social media and use those to trigger new discussions in the community.

Panel Discussions

Topical “Ask Us Anything” threads take little effort to start again and again. They draw on a broader pool of engagement, and make it clearer that members can identify with the topic. These threads also play on the ‘availability heuristic’, since members are more likely to believe their question will be answered there than if they posted it as a thread of their own.

Lazily, these threads are quite easy to set up on our end too. I have an HTML template in which I can swap out the topic, add in the experts on the panel (via a Google Sheets tool I built to scrape our own site and concatenate the HTML together - hacky, I know... :p), and then share it on inbound.org. To get the first people there, I email out (with our personal email process), segmenting the call to join our panel to our known contributors versus just posting the question to everyone else. This can all be done in under two minutes!
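The Google Sheets tool itself isn’t shared in the post, but the swap-the-topic-and-concatenate idea can be sketched in a few lines of Python (the names, URLs and markup here are all illustrative, not the actual template):

```python
# Sketch of the panel-thread templating idea: drop the topic into a
# fixed HTML template and concatenate one <li> per panel expert.
panel_template = """<h2>Ask us anything about {topic}</h2>
<ul>
{experts}
</ul>"""

def build_panel_html(topic, experts):
    # One list item per expert, spliced into the template.
    expert_items = "\n".join(
        '<li><a href="{url}">{name}</a></li>'.format(**e) for e in experts
    )
    return panel_template.format(topic=topic, experts=expert_items)

html = build_panel_html("PPC", [
    {"name": "Casey", "url": "https://inbound.org/in/casey"},
    {"name": "Kristina", "url": "https://inbound.org/in/kristina"},
])
print(html)
```

The spreadsheet version does the same thing with one row per expert and a CONCATENATE formula per row.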

We’ve experimented with the formatting of these threads (here’s a failed attempt) but found that tailoring an existing discussion thread works best. Though separate threads for each question would be better for SEO, they dilute activity across multiple threads. We’ve found evidence that more activity generates faster activity. A simple rule of thumb to drive engagement in a community - increase social density! Get everyone with something in common in the same place at the same time.

This works so well, it drives one of our key product decisions...

Creating “Never-Ending” Discussion Threads

Currently, members can upvote comments in threads so that the most upvoted comments float to the top. This is great for surfacing a “best comment” - which makes sense for answering questions, but not for some other formats.

Rather, these types of threads would benefit from comments being in chronological order. This would mean we could continuously nurture threads like this and get perpetual, ongoing activity. Members would know to go to these threads to find new, relevant discussions, and we could nurture other members who would find them interesting.

Together with our new feed view that (some of you) might have noticed we’ve been A/B/C testing, I mocked up an idea of what the future inbound.org interface might look like. Inspired by Slack, I thought to call these never-ending threads ‘channels’. These would appear in the left hand navigation for members to tab between topics that they care about. The homepage would then be a mashup of all the channels you’re following.

Whilst we’ve successfully created clinics around topics, channels could take other forms too including:

Topics

Events

Locations

Latest Jobs

Companies

Private groups

Editorial picks & featured discussions

AMAs and other specific types of threads

Your conversations and @mentions

Perhaps inbound.org could end up looking something more like this mockup?

We can also redesign our sharing submission form to encourage freeform comments and turn article sharing into a source of discussions (which delivers more engagement, more contributors and more weekly active users). Everything would be a discussion thread, shared with a channel whose followers have opted in to that feed of content and discussion.

This is the direction that inbound.org is going - a personal feed of the latest ideas, news and people in marketing that matter most to you. Stay tuned for channels launching on inbound.org soon!

Marketing automation software I've seen isn't particularly set up for managing "sets" of complex workflows. Which leads to the first rule of complex marketing automation workflows.

Workflows do one thing. One thing only.

Complexity like if/then branching is powerful, but the workflow should be designed to do one thing to one group of people.

Doing something else? Enroll them in a separate workflow. Create a framework and naming convention for that set of workflows (and document it properly). Test it with a small sample set (it doesn't have to trigger the marketing actions, like sending emails). Then test it with a larger sample set. Can your marketing automation system keep up?
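The post doesn't prescribe a specific convention, but as an illustration, a hypothetical `campaign__audience__action` naming scheme - plus a quick check that a set of workflow names follows it - might look like this:

```python
import re

# Hypothetical naming convention for a set of workflows:
#   <campaign>__<audience>__<single action>
# e.g. "onboarding__trial-users__send-welcome-email"
# One action per workflow; anything doing more gets split out.
NAME_PATTERN = re.compile(r"^[a-z0-9-]+__[a-z0-9-]+__[a-z0-9-]+$")

def check_workflow_names(names):
    """Return the names that break the convention."""
    return [n for n in names if not NAME_PATTERN.match(n)]

bad = check_workflow_names([
    "onboarding__trial-users__send-welcome-email",
    "onboarding trial users do everything",  # breaks the convention
])
print(bad)
```

A lint step like this is cheap to run before enrolling a sample set, and it keeps "one workflow, one thing" honest as the set grows.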

It's been just over a year since I published this slidedeck. It wasn't meant to be presented on stage or anything - just one document I could share on inbound.org and with friends who were interested in getting into marketing.

I've written a moderate amount on various marketing blogs before, but this was the one piece that my in-real-life, non-marketing friends would talk to me about. That was odd. That was nice... :)

I noticed today it has over 1 million views. Wow! That's enough readers that it can't just be Mum.

If someone asked you to build a company just like Uber, where would you start? Buy a car? Rent a car? Create an app? Find a driver? There’s an overwhelming number of different things you could choose to do, but where the dickens do you start designing a marketplace business?

That doesn't stop people from trying to build an "Uber for X" or an "Airbnb for Y". That's great when you see a successful company, and can conceive of something similar in another industry. But the problem with working from analogies like this is you don't really understand the underlying mechanics of what's going on.

What I really wanted to achieve with my platforms research was a model that could work with any paid exchange of any type on a platform. The idea here isn't to say "like Uber" or whatever, but explain in English what's going on.

The Platform Cycle Model is what I came up with - it links together all the component parts of an online marketplace platform, and the relationships between them. The cycle part illustrates the first problem - where do you start?

First, how do you get participation?

The circular model starts with the consumer value proposition - there has to be some benefit to exchanging for a transaction to occur. To participate, consumers have to believe they’ll be better off with the platform’s consumer value proposition.

Since online platforms don’t deliver the service themselves, they must engage another set of users - *producers* - to deliver on the platform’s behalf. But producers need their own value proposition - one that leaves them better off than their next best alternative - in order to willingly participate and contribute their resources as inventory on the platform. Often, that next best alternative is being idle.

The technology and networks within platforms often allow producers in existing or similar markets to make better use of their latent resources, or bring new forms of inventory into the market. This new inventory (like staying in someone’s home instead of a hotel) often needs some form of behavioural tweak, involves industry-wide (re)education and is in a peer-to-peer format. Peer-to-peer seldom works at scale outside of a platform, which also means platforms are the exclusive vendors of lots of new, quirky inventory.

You've got participation. Now how do you make matches?

Consumers and producers may be willing to participate on the platform, but it means nothing without the value proposition being fulfilled. The transactions still need to be made, which requires consumers and producers to match in the marketplace.

The platform’s challenge is not only to facilitate one match, but many successive matches, so that consumers and producers get lasting value from participating. This needs a large pool of participants compatible enough to work together - critical mass.

Product-marketplace fit diagram

Consumers and producers have heterogeneous preferences - they need different products, at different times, in different places, and so on. Critical mass takes the consumers and producers with similar enough preferences - mutually homogeneous expectations - for one product. This leads to many successive matches instead of a one-off.

A marketplace can’t function without critical mass, and neither can the consumer or producer value proposition. Therefore, in a marketplace, critical mass together with the consumer value proposition and producer value proposition make up the product-market fit, or rather product-marketplace fit.

The consumer and producer value proposition, and critical mass.

Besides sustaining a market, critical mass also brings standards. Within a platform, standards have many benefits:

1. Standards bring continuity to both sides’ value propositions

Critical mass supports the expectation that the platform can deliver again, and so it encourages lasting participation. Without this, belief in the value proposition is lost and consumers and producers no longer willingly participate.

2. Standards bring balance to the compromises between the two sides’ value propositions

Platforms can serve many different consumers and producers by aggregating users with similar enough preferences. To build critical mass, this usually involves small compromises in the “perfect” experience for both, whilst still being good enough to satisfy each side’s needs.

3. Standards with critical mass allow deviations from a standard to be detected, giving feedback and reputation systems a chance to function

Standards bring conformity to every exchange. With a set expectation on both sides from a large sample, behaviour can be measured which allows for ratings, reviews, rewards and punishment. Feedback systems are crucial to a marketplace platform functioning effectively, as we’ll discuss later.

To make the matches occur, consumers and producers from the pool with similar preferences need to send and accept a match.

From the critical mass of users, there are active users who are “in the moment”, ready to match - someone ordering a driver to pick them up, say. An effective platform must have a high matching efficiency, where most requests are fulfilled and the value propositions are realised on both sides.

Maximising matching efficiency depends on critical mass, but also on compliant behaviour from the consumer and producer involved. Surpluses or shortages of consumers or producers degrade matching efficiency and belief in the value propositions, which decreases trust and participation. If producers are turning down consumers, or vice versa, the platform must intervene to encourage matches or delist the offender. The quality of matches precedes the quantity of matches in a marketplace.

If a platform has both a critical mass of users and a high matching efficiency, it can achieve liquidity. Liquidity ensures the platform’s marketplace has an ongoing fulfilment of demand with supply, and meets the consumer’s and producer’s value propositions on their next request. A liquid market with free-floating pricing and quantity (from producers’ participation) also allows efficient pricing to occur, where price and quantity move to meet supply and demand - as in any market.

From matches, we have transactions…

The output from a liquid market will be many matches and many transactions.

There are three main outputs from these transactions - delivery, mutual feedback and customer payment.

1. Delivery

The producer delivers the personalised consumer experience. This reinforces belief in the consumer value proposition, and drives word-of-mouth communication and further participation by the consumer.

2. Mutual feedback

The consumer and producer rate each other’s performance which gets aggregated by the platform. Each user’s reputation data gets used to regulate the experience. Good and bad behaviour is promoted and punished. The idea here is to create mutual trust between consumers and producers - any consumer could trust any producer so long as the transaction was hosted on the platform. Mutual trust drives co-operation.

3. Customer payment

The consumer’s payment for the service drives revenue for the platform. This is split between the producer’s and the platform’s cut. The producer’s cut leads to their operating profit which validates the initial producer value proposition of perceived profits. This reinforces their belief in the producer value proposition and platform, driving more participation.

So in summary

The Platform Cycle Model links how and why producers and consumers interact and transact to each other’s benefit, and the platform’s role in the process.

Those who really know me have been on the receiving end of some serious geekdom. And the thing I can geek out about no end… it’s that whole flying thing.

I commence my confession by classifying the four things I geek out about in aviation...

You are FLYING! The whole experience of being in a chair, in the sky, like a Greek god, being served drinks, just casually blasting across the world…

You are FLYING! The engineering side of how a wing works, jet turbines, the *huge* forces involved. Mind = blown.

There's no business like the airline business. Few things come close to the global, capital-intensive, foreign-exchange, multi-market, multi-culture, indescribably complex operation and strategy of running an airline.

The puzzle of finding absurdly cheap flights. Like 60… 70… 80% off business class flights (ergo: the same budget as economy) AND THEN the same trick again in economy. Fly more OR fly in more style. MOAR!

But people do take an interest - a much more meek and mild interest. I guess flying means travel. Travel means experiences. Experiences make life what it is. In particular, people find the cheap flights + optimised experience angle interesting.

And... I feel like I need to write this down in a way that will be useful to someone at some point.

So, here’s a quick poll for you, friends and followers… what do you want me to write about? What do you need a friendly plane geek to help you out with? I’m particularly thinking of my fellow Brits, for whom I probably have the most useful advice.

I can't program. But I can join up bits and pieces without having to. If you can work IFTTT, write a formula in Excel, know basic HTML and can copy-paste snippets from Bootstrap, you can build basic products.

Note to startup founders: any motivated non-technical founder can use this to get a simple product set up in an afternoon.

Start small: creating code snippets

The use case: I want to create repeated snippets of HTML for a website.

A proper web developer could make a database, put in variable names and change {firstName} {lastName} according to which user IDs are requested.

But I don't know how to do that...

I can make a "database" in a Google Sheet with all the variable names as columns and records as rows. Then, I can use the concatenate function to piece together the code between {firstName} etc.

Something suitably dreadful like this...

It works! I can alter the records depending on what I want, and spit out usable HTML every time.

Takes 5 minutes to assemble.
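For anyone who'd rather script it, the same CONCATENATE-over-rows trick is a few lines of Python (the column names and HTML snippet here are illustrative):

```python
# A made-up "database": one dict per spreadsheet row, one key per column.
rows = [
    {"firstName": "Ada", "lastName": "Lovelace"},
    {"firstName": "Alan", "lastName": "Turing"},
]

# The HTML snippet with {placeholders} where the spreadsheet would
# splice in cell values via CONCATENATE.
snippet = '<p class="author">{firstName} {lastName}</p>'

# One snippet per row, joined into usable HTML.
html = "\n".join(snippet.format(**row) for row in rows)
print(html)
```

Same idea, same output: alter the records, spit out fresh HTML every time.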

Stepping it up a notch. Dynamic content…

So that's kind of helpful, but what if I wanted to use a tool to find something out? Beyond just fiddly copy-pasting…

I'm a plane nerd. One of my plane-nerd activities is finding absurdly cheap flights. The key trick (for long haul from the UK) is to fly from mainland Europe, taking advantage of lower origin-to-destination fares (particularly during sales), the absence of UK Air Passenger Duty, and so on.

You can find these with ITA Matrix (think having hundreds of Kayak / Hipmunk / Skyscanner / airline search results open concurrently - awesome!) but it is still dependent on user input.

What if there was an app which knew which key destinations I want to go, and alerts me to the cheapest fares and when?

So, by creating a list of the sale offer pages on each country-specific airline website, scraping the destination and price tables, converting the prices into £ using the GOOGLEFINANCE function to get the latest spot rates, then sorting the results by price, it's possible to get an at-a-glance overview of the cheapest places to depart from.

Takes about an hour. Boosh!
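The convert-and-sort step at the heart of that spreadsheet can be sketched in Python; the fares and spot rates below are hard-coded, illustrative numbers standing in for the scraped tables and the live GOOGLEFINANCE rates:

```python
# Illustrative spot rates to GBP (the spreadsheet pulls these live
# with GOOGLEFINANCE).
spot_rates_to_gbp = {"EUR": 0.85, "SEK": 0.075, "GBP": 1.0}

# Illustrative fares, as if scraped from airline sale pages.
fares = [
    {"origin": "Stockholm", "price": 14000, "currency": "SEK"},
    {"origin": "Dublin", "price": 1200, "currency": "EUR"},
    {"origin": "London", "price": 1500, "currency": "GBP"},
]

# Convert everything to a common currency, then sort cheapest-first
# for the at-a-glance overview of where to depart from.
for f in fares:
    f["price_gbp"] = f["price"] * spot_rates_to_gbp[f["currency"]]

cheapest_first = sorted(fares, key=lambda f: f["price_gbp"])
print([(f["origin"], round(f["price_gbp"])) for f in cheapest_first])
```

Converting to a single currency before sorting is the whole trick - without it, comparing a SEK fare against a EUR fare is meaningless.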

Full Read-Write MVPs

So far we've only really created things to consume. What about creating and saving things too?

Yes, you can happily hack that together with Google Sheets and a tool called Gridspree.

Google Sheets has an API endpoint for those who can use it, but for mere mortals you can sign up to Gridspree with your Google account and it will read from (and write to) your Google Sheets.

Gridspree's designed for a web interface, so you get a Handlebars HTML template to adjust and a one-line script to turn your rows and columns into something dynamic. It can only do this to the first sheet in a whole spreadsheet, but that's not the end of the world in MVP-land.

Better still, it gives you the form fields to save content back to your spreadsheet. Wonderful!

So now we can build a simple, single-user app. You could try to use this as a basic backend to bolt actual code onto (logins, many users, etc.), but that's getting into the world of real products. This is enough to test ideas with.

Quick, filthy ways to make MVPs in Google Sheets. Try it, if only to humour your proper developer friends :)

The common advice amongst folks on the internet is to BLOG. Something I sort of agreed with, but never executed. It wasn't until a friend challenged me in person (here's his schpiel on it) that I really began to think about how to make this into a thing. My thing. So, having planned to set aside time to do it, here I find myself in the Christmas holidays with a few spare moments.

But I fear this may have been a project without a deep intrinsic motivation and purpose. The trouble is, I've been slightly spoiled with large audiences for things I've published previously. Though they're entirely vanity metrics, a large readership motivates more writing and creation of content. A lot of people gave a damn about what I had to say. WOW!

I figured I wouldn't write on a personal blog unless I have an audience to write for - I want *someone* to find my time and efforts to be worthwhile. Seems only natural to want to find an audience for what I want to write here.

But, there's nothing stopping that someone being me..?

There's a bunch of stuff I really should make a bigger effort to write down. I'd find it useful. It'd make it easier to share more complex ideas - "I've had a number of thoughts on this before; I posted them on my blog here...".

My favourite user of this tactic is Dharmesh Shah, who I work with: he writes posts around issues that really pain him, buys a domain name that's easy to remember (like SorryNoCalls.com) and drops it into everyday conversation - online and offline. Maximum convenience!

So, here's a bunch of posts written by me to be read by me. You might find them useful. You might find some to be complete gibberish. And maybe some of it will be worth getting a custom domain to redirect to..!