Category: Education

This week I had the pleasure of presenting a workshop on Google Analytics at WPCampus in Buffalo, NY. One feature I demonstrated during the session was using content groupings to better understand sections of your website as comparative units. After talking with some other folks, I decided that a more in-depth discussion of this feature, along with some examples, was warranted. So, here you go!

Recently, I’ve spoken at events about how higher education institutions can begin to think about Tag Manager as a strategic asset in their digital strategy plans. Below you’ll find a number of things, including audio/video of my talk (once it’s available), presentation slides, and links to many of the resources I mentioned.

If you’re in higher ed web development, you probably saw this article making the rounds criticizing university websites. Melonie Fullick put this together, along with feedback from other Twitter users, after trying to research some information from various sites. I, too, was recently frustrated researching programs at institutions, finding it infuriating at times to get relatively simple information. I’ve talked with a couple folks about the article as well, and thought I’d give some additional commentary. Not necessarily counterpoint or refutation, just an additional viewpoint from someone who spent years behind that curtain.

This time of year brings with it a particular discussion that I always see repeated in the various higher ed circles I still follow: commencement livestreams. Not whether universities should be doing them – dear no, that’s long since settled – but rather what should go with them. Should they be encouraging people to take selfies, what’s the hashtag, are they curating content from Instagram to a projector, are you using this tool, that tool, and blah, blah, blah. Some recent reading I was doing also felt pertinent to this topic, so I wanted to challenge my higher ed friends out there in webdev, marketing, advancement, et cetera with a question: why aren’t you trying to focus on the business value of commencement streaming?

“Episode thirteen starts with a pop as Michael Fienen, Senior Developer at Aquent and CTO for nuCloud, cracks into a new bottle. In this episode, we talk a lot of tech, but we talk higher ed, too. Michael used to be the one-man web team, bringing us down a long winding road of figuring out how to prioritize tasks alongside putting out fires. We also talk about the importance of face-to-face interaction, especially across departments, and how being a “translator,” or at least having one in your office, is key to collaboration. We also do a lot of tech talk, discussing UX and UI, website design fads, and skeuomorphism, a new phrase you can use to make you sound smart.”

Listen to the full interview at http://highered.social/michael-fienen-the-moderately-priced-scotch/

So, last night I got off on a bit of a rant regarding the nature of web development in higher ed. There was no particular reason for it – well, that’s not totally true, I guess. Chris Coyier wrote a post on the process of CMS selection for higher ed. That’s a topic that’s near and dear to my heart given the work I did at .eduGuru and our CMS research. I’d been thinking about it some recently, and was glad to see him take the topic on. But that got me rolling on higher ed web development, sort of swinging from one issue to another. What follows is a Storify of the rant that took place on Twitter. Some posts have been reorganized for narrative consistency.

Sit down for a bit and read, hopefully enjoy, and feel free to share your thoughts back either on Twitter or below in comments. It’ll take a bit, as there’s a lot here. I’ve tried to also include various replies and feedback that were also shared to build on the discussion (and will continue to do so as comments are made in the short term).

If, for any reason, folks feel motivated to use anything I’ve said in this Twitter stream elsewhere, consider all the content available under a Creative Commons. Share it, build on it, and do great work.

^ Be sure to use the “Read Next Page” button to continue ^

View the story at Storify if you can’t see the “next page” button, which sometimes happens when the embed doesn’t size itself right.

I want to extend sincere congratulations to the folks at Ozarks Technical Community College on their redesign. It is probably one of the bravest things I’ve seen a college do with its homepage in quite some time, for better or worse. And that’s good, because that’s how everyone learns. Someone has to take a chance once in a while. What especially caught my attention, though, was that they did something I never really thought would be possible. Back in 2009, I wrote a bit on the principles of IA in large sites like a university’s. Several conversations ultimately spun off that article, one of which involved the idea of driving a university site’s navigation entirely through search.

Back then, it was little more than a pipe dream though. Random musings about a “what if” scenario. There’s so much to consider for it – and I’m not even talking about things like the political side of university sites – that as neat as the idea seemed, I never thought it could be done. And while I applaud OTC’s attempt, I still think the approach is not really ready – though it could be with just a little more work. Here’s why.

Majors Search Results

Probably the most important thing is SEO. If you are going to lean so heavily on search, your site – all of it – needs to have pristine SEO so that everything can be found and located properly. We’re talking metadata, keyword density, link text, the whole shebang. OTC is using a Google Search Appliance of some kind, which can afford you a lot of power (sadly, Google discontinued the Mini this year, leaving only the more expensive GSA on the product line). You can see some of that power in action if you do a search for “programs.” Note that at the top, you get the keymatch they manually entered to make sure a search for “programs” always surfaces the right page first. That’s good. Now do a search for “majors.” No keymatch this time referring the visitor to the programs page. As a matter of fact, the top matches aren’t relevant at all. That’s not to say the results are consistently bad, but in this approach, there’s just so little room for error.
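For reference, GSA keymatches are maintained as simple comma-separated rows of search term, match type (KeywordMatch, PhraseMatch, or ExactMatch), target URL, and display title. A hypothetical pair of entries that would close the “majors” gap – the URL and title here are invented for illustration – might look like:

```
majors,KeywordMatch,http://www.otc.edu/programs/,Degree Programs
major,KeywordMatch,http://www.otc.edu/programs/,Degree Programs
```

Since keymatches are manual, mining the search logs for terms people actually type is exactly the kind of ongoing work a dedicated SEO person earns their keep on.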

PSU’s unified search

Another pain point for me here is the use of the stock results page as well. It’s bland, uninteresting, and doesn’t invite the user to explore the results. They have added additional search options above the box, but they aren’t integrated at all – each is a different landing page that isn’t necessarily search related. Lastly, they don’t seem to be taking advantage of collections, which can make a GSA or Mini so powerful in getting users into the right “bucket” of information. Collections are a way of filtering content into logical categories of some kind. For instance, you could have a “News” collection that keeps all the press releases searchable and separate from the normal search. At PSU, their search is an example of both unified search and collections (seen to the right). Things like “athletics” and “classes” are collections, while “people directory” is actually a separate system. But it all works through the single interface (though the people results do go to a different results page, so it’s not entirely unified).

Something else, still specific to the GSA: they don’t appear to be using OneBox modules either. That’s the perfect way, for instance, to pull in some of those external searches from the results page header, like departmental and contact searching. For instance, do a search for “NUR 230,” a nursing program course. Using a OneBox module, they could instantly provide course information, schedules, associated books, teachers, etc. If you want more examples, the OneBox is what gives you instant results in Google when you type in a FedEx tracking number, look for movie times, check the weather, and so on. That’s the trick here. If you’re going to go all in, blow it out of the water. Universities have TONS of structured data that could be presented this way, to fantastic effect. Won’t someone think of the user’s clicky finger?
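To make the idea concrete without getting into the GSA’s actual OneBox provider API, here’s a generic sketch of the pattern in JavaScript: recognize a structured query like a course code before it hits regular search, and answer it with structured data instead. The catalog object and all of its fields are invented for illustration.

```javascript
// Hypothetical course catalog; in practice this would come from the SIS
// or another structured data source.
const catalog = {
  "NUR 230": { title: "Nursing Concepts II", credits: 4, schedule: "MWF 9:00" }
};

// If the query looks like a course code ("NUR 230", "nur230"), return the
// structured record; otherwise return null and fall through to normal search.
function instantAnswer(query) {
  const match = query.trim().toUpperCase().match(/^([A-Z]{2,4})\s*(\d{3})$/);
  if (!match) return null;
  const code = match[1] + " " + match[2];
  return catalog[code] || null;
}
```

The same shape works for any structured lookup a university already has: directory entries, building hours, deadlines.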

OTC Chancellor Hal Higdon said a review of the college’s website using Google Analytics showed that more than 80 percent of site visitors find what they look for through an outside search tool or OTC’s Google search server. Often, visitors skip the front page and go directly to the search box to quickly find the information they need.

Google Analytics Site Search Usage report

Admittedly, I know nothing about just what went into this research (and if anyone at OTC reads this, I live in Pittsburg, KS, about an hour and a half from you – let’s talk), but I would caution any school interested in this that analytics alone will absolutely not give you the full picture. It can give you a lot of information, to be sure, but context and intent are critically important to this particular endeavor. For instance, it’s easy to say that people search a lot on a site because the navigation or IA sucks – something analytics alone won’t tell you. So it would seem reasonable that going all search would avoid that problem, since search is designed to do an end-around on such things (this is, of course, assuming you aren’t considering things like nav and IA in your search logic). But maybe they search simply because your content sucks, and they’re trying to find something more informative. That’s a content problem. My point is, know your problems and know your goals. Have a plan for each, isolate your success metrics, and have a maintenance and measurement scheme ready.

And there certainly may be something to catering to users who search. A quick look deeper into the Google Analytics report sampled above (you do look at your search reports, right?) revealed some extremely interesting metrics. For instance, the average user spent 4:33 on the site, as opposed to 11:48 for users who searched. Users who didn’t search viewed an average of 2.73 pages, compared to 8.28 for searching users. What the analytics here don’t tell me, though, is why. Hopefully, if your numbers are similar, you’d want to know the answer to see if there’s something valuable there to be leveraged.

There’s something else that bugs me, though. While I don’t want to nitpick, I feel the need to point some of this out.

“Start Here” navigation

In trying to mimic Google, they also used a “services header” on the homepage. That’s fine, go for it. But, I gotta admit, the logo really bugs me. It just looks stuck on and clip-arty. More than that, though, I’m bugged by the “Start Here” link. First off, “Start Here” isn’t at all descriptive of what to expect when I click on it. And once I did, I was confused to be looking at a page with a careers-based URI whose content seemed to be about academic programs. That’s just a labeling thing, but it’s a pretty major one, since it’s first chair in what little navigation they have. They also added a “more” link. While I know this is in line with mimicking Google, it smells too much like rebranded quick links. As a user, if the goal is to have me search, why would I click the “more” link rather than just type in the keyword for what I want? From the very start, you’re already inviting me to break with your intended navigation scheme, and that’s a dangerous game.

At the end of the day, I still think there’s something to this. Every university struggles desperately with IA and navigation. Awesome, global search just seems natural. The barriers that will most commonly prevent success are technology that can’t deliver, and the politics of university web maintenance. If you’re considering it, keep this stuff in mind:

Hire a full time SEO person. Period. Don’t be cheap here.

Don’t abandon navigation all together. Consider your “services” that require fast access. This requires a shift in thinking, making your homepage that of a “service provider,” rather than whatever you are now.

Spend six months on taxonomy. Card sorting. User research. Whatever people call something, make sure those keywords are mapped and accounted for.

Make use of autocomplete and dynamic results (again, both things Google does). Save your users as much time as you can, and help eliminate mistakes.

Utilize tools like OneBox or similar systems to provide enhanced result data for commonly accessed, structured data.

Make sure you have a reporting system on pages. A “Was this what you were looking for?” flag people can click that will report the page and search that sent them there.

Accept the fact that you may have to take away a lot of editing rights from people to prevent pollution of your results. Two words – Quality. Control.

You might consider splitting the site into a sort of “gated” and “ungated” area, where the gated area is vetted, approved, specific info. The ungated section is everything else that no student ever cares about.

Respect the results page and how important it is.

Unify your search platforms.

Measure and track everything. Can you tell me the most viewed, but unclicked autocomplete keywords? Most common misspellings? Keywords most likely to result in an application? Bounce rate after a search result? And these are just some of the easy ones.

Your search needs to be smarter than your users. It should know what they want, regardless of how they ask for it. It needs to deliver, accurately, without question. It needs to adapt incessantly.

Hire a full time SEO person. Period. Don’t be cheap here.

Edinboro’s keyword autocomplete

Oh, there’s one more important thing here. I don’t care if your homepage is a Google knockoff or not, you should care about search. Almost all of my bullet points above hold true no matter what your web strategy entails. Edinboro University is one I credit with putting a ton of work into mapping keywords for things on their site to an autocomplete feature for their search. Their keyword system is completely secondary, too; it’s not in a GSA or anything like that, but they unified it properly so the user’s experience is seamless. All users know is that they’re getting good recommendations that can save them keystrokes. Otherwise, Edinboro’s search is implemented like any normal search, nothing else special about it. But the details, the little things, are what can matter most.
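A keyword-mapped autocomplete like Edinboro’s can start out very small. Here’s a minimal sketch, assuming a hand-curated map of the terms people actually type to the pages they mean (the entries and URLs are invented); the debounce keeps you from firing a lookup on every single keystroke.

```javascript
// Hypothetical keyword map: each entry lists the words visitors use,
// plus the page they should land on.
const keywordMap = [
  { terms: ["majors", "degrees", "programs"], label: "Degree Programs", url: "/programs/" },
  { terms: ["tuition", "cost", "fees"],       label: "Tuition & Fees",  url: "/tuition/" }
];

// Return every entry whose terms start with what the user has typed so far.
function suggest(input) {
  const q = input.trim().toLowerCase();
  if (!q) return [];
  return keywordMap.filter(entry =>
    entry.terms.some(term => term.startsWith(q))
  );
}

// Debounce wrapper: only run the lookup once the user pauses typing.
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}
```

Wire `debounce(renderSuggestions, 200)` to the search box’s input event and you have the core of the feature; the real work, as Edinboro shows, is curating the keyword map itself.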

Good search is like a life preserver. It can save a visit. It doesn’t matter if it’s just a tool or your entire navigation. Bad search frustrates users and drives them away, and nobody should be in that business. At the end of the day, I have no doubt OTC will continue to improve, and for a community college, I have a ton of respect for the effort they’ve put forth here. I’m damn interested to see how it evolves.

Have you heard of Fiverr yet? Fiverr is a service that launched back in February of 2010 as a tool for people to sell simple goods and services for five bucks. Maybe that’s planting a tree in your honor in the rain forest, or sending a letter to a random soldier, or belching your name on video. Pretty much anything goes. It’s not a terrible idea, strictly speaking, and is a nice way for people to make a little extra money doing something they’re good at.

So, what does this have to do with higher ed, and why should you care? Well, simply, this.

Search Results on Fiverr for “edu”

It’s no secret that there are plenty of black hat SEO techniques for link farming. This is also far from the first time someone tried to leverage the .edu TLD for link relevancy (Note: it seems pi.edu has finally gone away, without much fanfare. No one misses it.). On top of it, odds are you can’t make Fiverr stop these listings. Because screw you that’s why. At least, I suspect that’d be the subtext of the answer you’d get from them.

How Does It Work?

Simple: spidering services have created lists of blogs, wikis, and other sites with unmoderated editing or comment systems. The people offering these services buy or pirate those lists. In some cases, they have tools that automatically submit to sites on the list. Then you watch the spam start coming in. Anyone who runs a WordPress site understands how much trouble spam can be. If you’ve ever wondered where it comes from and why, this is a pretty good start. In the end, the provider or their software tries to pass as a legitimate commenter and includes a link in the post text or the author site field (if you link the author’s name), which then shows up, they get paid, and you get polluted.

This is a much less offensive and less dangerous version of account hijacking that we’ve seen in the past, where faculty, staff, or student web space hosted by the university is taken over and used as a landing page host or to drive backlinks and keywords.

What Can You Do?

Shut. Down. Everything. Okay, not really. But seriously, do review your moderation and approval processes for your blogs and wikis. Anything someone can contribute to should be reviewed to make sure you haven’t created a target. Keep some of these in mind (adapt to your environment):

Don’t ignore your sites and security settings.

Try simple steps like requiring at least a first post to be approved before users are whitelisted.

Look at third party commenting services like Disqus or Intense Debate which have tools for addressing this that are better than yours.

Many CMSes have plugins that can provide more robust comment protection. For instance, Akismet is common for WordPress. I’ve had success with Spam Free WordPress.

Add moderation or extra steps to comments containing links.

Make sure links in comments are set to come through with rel=”nofollow” enabled.

Limit faculty and student abilities when it comes to setting up and configuring sites, blogs, wikis, etc.

Allow visitors to vote down or mark comments as spam.

Turn off commenting after a certain length of time or when a blog is discontinued but still available.

Set up a routine to audit your sites for this kind of spam every X months.
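A couple of the steps above – holding comments with links for moderation, and stamping rel="nofollow" on the links you do publish – can be sketched in a few lines of JavaScript. This is a naive illustration, not production code; a real implementation should use an HTML parser rather than regexes.

```javascript
// Any bare URL or anchor tag in a comment is reason enough to hold it
// for a human to look at.
const LINK_PATTERN = /https?:\/\/|<a\s/i;

function needsModeration(comment) {
  return LINK_PATTERN.test(comment);
}

// Add rel="nofollow" to anchor tags that don't already declare a rel
// attribute, so published links pass no SEO value to spammers.
function addNofollow(html) {
  return html.replace(/<a\s+(?![^>]*rel=)/gi, '<a rel="nofollow" ');
}
```

Neither check stops a determined human spammer, but together they raise the cost and remove the payoff, which is the whole game here.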

None of these suggestions will likely work on their own. Some may or may not work at all in some cases. There’s no real silver bullet to the problem, as long as humans are willing to do the work manually for companies for $5.00. But, you can at least try to minimize your risk of exposure by making the effort for the spammers cost more than the time it’s worth. When they get through anyway, if you’re monitoring properly you should be able to delete the comment and blacklist the user or IP quickly enough that it becomes apparent you aren’t a high value target. The bottom line is to be vigilant, active, and take responsibility for the sites and services you’re offering that could be targets for these types of tools. Fiverr is far from the only way to accomplish this (see?), but what really matters is preventing the end result.

On its face, Filtrify is just another jQuery plugin that you can use for atomic control of a collection of DOM elements. Which is cool enough I suppose. But check out this example on their demo site. Now, instead of movies, imagine it’s student action photos from different programs, or some other visual representation of the program. Instead of genres and actors and directors as filters, you have schools and interests and jobs. It would leave you with an interactive program listing that invites a user in to play and explore. In this particular case, Filtrify is serving as an extension of the live filter design pattern – enabling a user to see all the available options, and then selectively removing that which isn’t relevant to them. People like toys, and they are inherently curious. Create an environment that promises an opportunity for exploration, and you’ll net some explorers.

But wait, it doesn’t have to be Filtrify per se, either – that’s just one idea. Something like filtering blocks would work just as well. As would something you come up with entirely on your own. The trick is, you need to start rethinking the UX of the program listing (and probably a lot of other stuff on your sites, too), and really consider how your tools may be impacting prospective students’ ability to see you as the right institution for them. Jakob Nielsen pointed out how bad lists could be nearly a decade ago (see #7), yet schools seem to be married to them for lack of the desire to construct a better way. People don’t find long, unfilterable lists user-friendly at all. We already know that 17% of students will drop a school from their list if they can’t find what they want on your site. Even more will mark a school down if they have a bad experience. What is that risk worth?

The underlying issue here is that schools need to start putting more effort into the next step of their web design processes, and start looking at the user experience of what they are making. It’s easy and fast to slap stuff together and move on, but there is enormous value in usability testing. It’s part of the overall process that is too frequently skipped, since a webpage published is frequently seen as “good enough.” While the old fashioned linked list may be functionally adequate for the data being displayed, it’s a terrible way to encourage interaction and leave a good impression on your visitor.

Even if you didn’t want to use a library like Filtrify, you can still come at the problem of filtering content in a user friendly way by falling back on some basic principles like LATCH. LATCH is a content filtering methodology that most users are, consciously or not, readily able to adapt to. That makes it a great place to start when trying to solve the problem of helping people find what they need in any large archive of structured information.

So how could we apply LATCH to a set of link filters for our program listings? Here’s one example (and there are plenty of others):

Location: This could be a physical campus location, online programs, or a more meta concept like a college or school.

Alphabetical: This pretty much goes without saying. But keep in mind your taxonomy might not be the same as the visitor’s. Don’t be afraid to overload topics and point them to the same overall detail page.

Time: This one can be harder, but could be length of the overall program, number of credit hours, or number of total semesters.

Category: Think generalized subject or job areas here. For instance, “teaching” will likely return a number of different specializations.

Hierarchy: You could use this to break down by schools and departments, or requirements, or to set up graduate tracks.
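The LATCH breakdown above maps naturally onto data: give each program one field per facet, and let the filter intersect whatever facets the visitor has picked. A minimal sketch in JavaScript, with the program records invented for illustration:

```javascript
// Each program carries one field per LATCH facet it supports.
const programs = [
  { name: "Nursing",   location: "Springfield", category: "Health",     semesters: 4 },
  { name: "Web Dev",   location: "Online",      category: "Technology", semesters: 4 },
  { name: "Education", location: "Springfield", category: "Teaching",   semesters: 8 }
];

// Keep only the programs matching every facet the visitor has selected.
function filterPrograms(list, facets) {
  return list.filter(p =>
    Object.entries(facets).every(([key, value]) => p[key] === value)
  );
}
```

Here, `filterPrograms(programs, { location: "Springfield" })` narrows the list to the two on-campus programs, adding a second facet narrows it further, and sorting the result by `name` covers the Alphabetical facet for free.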

The insane part about all this is that in many cases it would only take a little work to make fairly significant usability improvements over the current lists of programs. Something as basic as a live search filter would provide users with at least a little empowerment over the current model for many schools. Empowered users will be engaged users. And it’s much easier to get an engaged user to fill out an application. And on the other hand, if the technology you’re employing on your website doesn’t instill them with faith in you to be modern and student-centric, then they’ll move on.
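That basic live search filter really is only a few lines at its core: match the query against each item’s text on every keystroke and hide the misses. A DOM-free sketch of the matching logic:

```javascript
// Return the items whose text contains the query, case-insensitively.
// An empty query returns everything, so the full list is the default view.
function liveFilter(items, query) {
  const q = query.trim().toLowerCase();
  return items.filter(item => item.toLowerCase().includes(q));
}
```

Wire it to the search box’s input event and toggle visibility on the list items it returns, and you’ve given users that little bit of empowerment for an afternoon’s work.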

Majors, minors, and programs are just one of many examples that could benefit from a little of this kind of TLC. I mention them as the focus of this post mainly because they tend to be really high in the funnel. But how about:

Student organizations

Offices and departments

Faculty listings

Events

Courses

How many things could you improve with just a few hours work, and a little focus on the overall UX of the content you are trying to present? Which do you think your visitors would get better use out of? Are you particularly proud of your program listing page? Share it in the comments below for others to see. And if anyone actually does build a site based on Filtrify, let me know, I’d love to see how it turns out!