In analytics circles, it is common to quote Peter Drucker: “What gets measured gets managed.” By quantifying our activities, we can measure the impact of decisions on important outcomes and optimize processes with a view to continual improvement. With analytics comes a tremendous opportunity to make evidence-based decisions where before there was only anecdote.

But there is a flip side to all this. Where measurement and management go hand in hand, the measurable can easily limit the kinds of things we think of as important. Indeed, this is what we have seen in recent years around the term ‘student success.’ As institutions have gained more access to their own institutional data, they have gained tremendous insight into the factors contributing to outcomes like graduation and retention. Graduation and retention rates are easy to measure because they don’t require access to data outside the institution, and so they have become the de facto metrics for student success. Because colleges and universities can easily report on these things, they are also easy to incorporate into rankings of educational quality, accreditation standards, and government statistics.

But are institutional retention and graduation rates actually the best measures of student success? Or are they simply the most expedient given limitations on data collection standards? What if we had greater visibility into how students flowed into and out of institutions? What if we could reward institutions for effectively preparing their students for success at other institutions despite a failure to retain high numbers through to graduation? In many ways, limited data access between institutions has led to conceptions of student success, and to a system of incentives, that foster competition rather than cooperation and may in fact create obstacles to the success of non-traditional students. These are the kinds of questions that have recently motivated a bipartisan group of senators to introduce a bill that would lift a ban on the federal collection of employment and graduation outcomes data.

More than 98% of US institutions provide data to, and have access to, the National Student Clearinghouse (NSC). For years, the NSC has provided a rich source of information about the flow of students between institutions in the U.S., but colleges and universities often struggle to make this information available for easy analysis. Institutions see the greatest benefit from NSC data when they combine it with other institutional data sources, especially the demographic and performance information stored in their student information systems. This kind of integration is helpful not only for understanding and mitigating barriers to enrollment and progression, but also as institutions work together to understand the kinds of data that are important to them. As argued in a recent article in Politico, external rating systems have a significant impact on setting institutional priorities and, in so doing, may have the effect of promoting systematic inequity on the basis of class and other factors. As we see at places like Georgia State University, the more data an institution has at its disposal, and the more power it has to combine multiple data sources, the more it can align its measurement practices with its own values and do what’s best for its students.
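The kind of integration described above amounts, at its simplest, to joining external enrollment records onto institutional student records. The sketch below shows the idea with plain Python; all record layouts and field names here are invented for illustration and do not reflect any actual NSC or SIS file format.

```python
# Illustrative sketch only: left-joining a hypothetical NSC-style
# enrollment extract onto a hypothetical SIS extract by student ID.
# Field names are invented; real NSC and SIS exports differ.

sis_records = [
    {"student_id": "A1", "first_gen": True,  "last_term_enrolled": "2016FA"},
    {"student_id": "A2", "first_gen": False, "last_term_enrolled": "2017SP"},
]

nsc_records = [
    # A1 left the institution but continued elsewhere.
    {"student_id": "A1", "subsequent_institution": "State College",
     "enrolled_term": "2017SP"},
]

def merge_enrollment(sis, nsc):
    """Left-join NSC records onto SIS records by student_id."""
    nsc_by_id = {r["student_id"]: r for r in nsc}
    merged = []
    for record in sis:
        match = nsc_by_id.get(record["student_id"])
        merged.append({
            **record,
            "continued_elsewhere": match is not None,
            "subsequent_institution":
                match["subsequent_institution"] if match else None,
        })
    return merged

for row in merge_enrollment(sis_records, nsc_records):
    print(row["student_id"], row["continued_elsewhere"])
```

With a join like this in place, a student who looks like an attrition statistic in the SIS alone can be recognized as a successful transfer, which is precisely the shift in measurement the paragraph above describes.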

A lot of ed tech marketers are really bad. They are probably not bad at their ‘jobs’: they may be perfectly capable of generating leads, creating well-designed sales material, and building brand visibility. But they are bad for higher education and student success.

Bad ed tech marketers are noisy. They use the same message as the ‘competition.’ They hollow out language through the use and abuse of buzz words. They praise product features as if they were innovative when everyone else is selling products that are basically the same. They take credit for the success of ‘mutant’ customers who — because they have the right people and processes in place — would have been successful regardless of their technology investments. Bad marketers make purchasing decisions complex, and they obscure the fact that no product is a magic bullet. They pretend that their tool will catalyze and align the people and processes necessary to make an impact. Bad marketers encourage institutions to think about product first, and to defer important conversations about institutional goals, priorities, values, governance, and process. Bad marketers are bad for institutions of higher education. Bad marketers are bad for students.

Good marketing in educational technology is about telling stories worth spreading. A familiar mantra. But what is a story worth spreading? It is a story that is honest, and told with the desire to make higher education better. It is NOT about selling product. I strongly subscribe to the Stoic view that if you do the right thing, rewards will naturally follow. If you focus on short-term rewards, you will not be successful, especially not in the long run.

Here are three characteristics of educational technology stories worth telling:

Giving credit where credit is due – it is wrong for an educational technology company (or funder, or association, or government) to take credit for the success of an institution. Case studies should always be created with a view to accurately documenting the steps taken by an institution to see results. This story might feature a particular product as a necessary condition of success, but it should also highlight those high impact practices that could be replicated, adapted, and scaled in other contexts regardless of the technology used. It is the task of the marketer to make higher education better by acting as a servant in promoting the people and institutions that are making a real impact.

Refusing to lie with numbers – there was a time in the not-so-distant past when educational technology companies suffered from the irony of selling analytics products without any evidence of their impact. Today, those same companies suffer from another terrible irony: using bad data science to sell data products. Good data science doesn’t always result in the sexiest stories, even if its results are significant. It is a lazy marketer who twists the numbers to make headlines. It is the task of a good marketer to understand and communicate the significance of small victories, to popularize the insights that make data scientists excited, but that might sound trivial and obscure to the general public without the right perspective.

Expressing the possible – A good marketer should know their products, and they should know their users. They should be empathetic in appreciating the challenges facing students, instructors, and administrators, and work tirelessly as a partner in change. A good marketer does not stand at the periphery. They get involved because they ARE involved. A good marketer moves beyond product features and competitive positioning, and toward the articulation of concrete and specific ways of using a technology to meet the needs of students, teachers, and administrators in a constantly changing world.

Suffice it to say, good marketing is hard to do. It requires domain expertise and empathy. It is not formulaic. Good educational technology marketing involves telling authentic stories that make education better. It is about telling stories that NEED to be told.

If a marketer can’t say something IMPORTANT, they shouldn’t say anything at all.

Sometimes the most effective way of communicating the right way to do something is by highlighting the consequences of doing the opposite. It’s how sitcoms work. By creating humorous situations that highlight the consequences of breaching social norms, those same norms are reinforced.

At the 2017 Blackboard Analytics Symposium, A. Michael Berman, ‎VP for Technology & Innovation at CSU Channel Islands and Chief Innovation Officer for California State University, harnessed his inner George Costanza to deliver an ironic, hilarious, and informative talk about strategies for failing with data.

What does this self-proclaimed ‘Tony Robbins of project failure’ suggest?

Set Unclear Goals – Setting clear goals takes a lot of hard work and may require compromise. It’s way more democratic to let everyone set their own goals. That way, everyone can have their own criteria for success, which guarantees that, whatever you do, almost everyone is going to think of you as a failure.

Avoid Executive Support – Going out and getting executive support is also a lot of work. It means going to busy executives, getting time on their calendars, and speaking to them in terms they understand. It also means taking the time to listen and understand what is important to them. Why not go it alone? Sure, it’s unlikely that you will achieve very much, but it’ll be a whole lot of fun.

Emphasize the Tech – Make the project all about technology. And make sure to use as many acronyms as possible. Larger outcomes don’t matter. They are not your problem. Focus on what you do best: processing the data and making sure it flows through your institution’s systems.

Minimize Communication – Why even bother to make people’s eyes glaze over when talking about technology when you can avoid talking to anyone at all? Instead of having a poor communication strategy, it’s better to have no communication strategy at all. You’ll save the time and inconvenience of dealing with people questioning what you do, because they won’t know what you’re doing.

Don’t Celebrate Success – If you have done everything to fail, but still succeed despite yourself, it’s very important not to celebrate. Why bother having a party when people are already getting paid? Why take time out of the work day to reward people for doing their jobs? Isn’t it smarter to just tell everyone to get back to work? Seems like a far more efficient use of institutional resources.

Speaking from personal experience, Michael Berman insists that following these five strategies will virtually guarantee that you drive your data project into the ground. If failing isn’t your thing, and you’d rather succeed in your analytics projects, do the opposite of these five things and you should be just fine.

In response to the 2017 NMC Horizon report, Mike Sharkey recently observed that analytics had disappeared from the educational technology landscape. After being on the horizon for many years, it seems to have vanished from the report without pomp or lamentation.

Those of us tracking the state of analytics according to the New Media Consortium have eagerly awaited analytics’ arrival. In 2011, the time to wide-scale adoption was expected to be four to five years. In 2016, time to adoption was a year or less. In 2017, I would have expected one of two things from the Horizon Report: either (a) great celebration as the age of analytics finally arrived, or (b) an acknowledgment that analytics had not arrived on time.

But we saw neither.

Upon first inspection, analytics seems to have vanished into thin air. But, as Sharkey observes, this was not actually the case. Instead, analytics’ absence from the report was itself a kind of acknowledgment that analytics is not actually ‘a thing’ that can be bought and sold. It is not something that can be ‘adopted.’ Instead, analytics is simply an approach that can be taken in response to particular institutional problems. In other words, to call out analytics as ‘a thing’ is to establish a solution in search of a problem, as if ‘not having analytics’ were itself a problem that needed to be solved. Analytics never arrived because it was never on its way. The absence of analytics from the Horizon Report, then, points to the fact that we now understand analytics far better than we did in 2011. If we knew then what we know now, analytics would not have been featured in the report in the first place. We would have put understanding ahead of tools, and bypassed the kind of hype out of which we are only now beginning to emerge.

I agree with Mike. But I want to go a step further. I have always been fascinated by ontologies, and the ways in which the assumptions we make about ‘thingness’ affect our behavior. I have a book in press about the emergence of the modern conception of society. I have written about love (Is it a thing? Is it an activity? Is it a relation? Is it something else?). And I have written about dirt. Mike’s post has served as a catalyst for the convergence of some of my thinking about analytics and ‘thingness.’

Analytics is not a thing. I can produce a dashboard, but I can’t point to that dashboard and say “there is analytics.” There is an important sense in which analytics involves the rhetorical act of translating information in such a way as to render it meaningful. In this, a dashboard only becomes ‘analytics’ when embedded within the act of meaning-making. That’s why a lot of ‘analytics’ products are so terrible. They assume that analytics is the same as data science with a visualization layer. They don’t acknowledge that analytics only happens when someone ‘makes sense’ out of what is presented.

Analytics is like language. Just like language is not the same as what is represented in the dictionary, analytics is not the same as what is represented in charts and graphs. Sure, words and visualizations are important vehicles for meaning. But just as language goes beyond words (or may not involve words at all), so too does analytics.

It is a mistake to confuse analytics with data science. And it is a mistake to confuse it with visualization. If analytics is about meaning-making, then we are working toward a functional definition rather than a structural one. This shift from structure to function opens up some really exciting possibilities. For example, SAS is doing some incredible work on the sonic representation of data.

As soon as we begin to think analytics beyond ‘thingness’ and adopt a more functional definition, its contours dissolve very quickly. If what we are talking about is a rhetorical activity according to which data is rendered meaningful, then we are no longer talking about visualization. We are talking about representation. In a recent talk, I suggested that, to the extent that analytics is detached from a particular mode of representation, and what we are talking about is intentional meaning-making (meaning-making intended to solve a particular problem), then even a conversation can become ‘analytics.’
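To make the idea of non-visual representation a little more concrete, here is a minimal sketch of data sonification: a series of values is mapped to pitches and written out as audio, using only the Python standard library. The value-to-frequency mapping is invented for illustration and has nothing to do with SAS’s actual work.

```python
import math
import struct
import wave

def sonify(values, filename="series.wav", sample_rate=8000, note_seconds=0.25):
    """Map each value in a series to a sine-tone pitch and write a WAV file.

    Higher values become higher pitches via a simple linear mapping onto
    a 220-880 Hz range. Purely illustrative, not a real sonification scheme.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant series
    frames = bytearray()
    for v in values:
        freq = 220 + 660 * (v - lo) / span
        for i in range(int(sample_rate * note_seconds)):
            # 16-bit signed sample at half amplitude
            sample = int(32767 * 0.5 *
                         math.sin(2 * math.pi * freq * i / sample_rate))
            frames += struct.pack("<h", sample)
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sample_rate)
        w.writeframes(bytes(frames))
    return filename

sonify([3, 5, 2, 8, 6])  # a rising-and-falling "melody" for the series
```

The same series that a dashboard would render as a line chart is here rendered as a sequence of tones, which is the point: once analytics is defined functionally, the mode of representation is an open choice.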

So analytics is not a ‘thing.’ It is not something that we can point to. Is it an activity? Do we ‘do analytics’? No, analytics isn’t an activity either. Why? Because it is communicative, and so requires the complicity of at least one other. Analytics is not something that we do. It is something we do together. But it is not something that we do together in the same way that we might build a robot together, or watch television together, where what we are talking about is the aggregation of activities. What we are engaged in is something more akin to communication, or love.

Analytics is not a thing. Analytics is not an activity. Analytics is a relation.