In the list of perennial ‘controversies’ at the intersection of teaching and technology, the lowly laptop computer has always played something of an outsized role. I’m old enough to remember a time when the laptop’s extreme portability was breathlessly heralded as something that would revolutionize how and where learning would take place. (“It only weighs eight pounds; ten if you include the charger! Now students can read, conduct research, or write papers anywhere and everywhere! The era of ubiquitous learning has arrived!”) I also remember some of the dire predictions that were lobbed back in response. (“Students will be endlessly distracted! They will use their computers as intellectual crutches instead of learning to think and do for themselves! The end of deep, focused learning has arrived! Besides, what’s wrong with going to the computer lab — or using a typewriter, for that matter?!”)

Here’s a fun way to spend a few minutes, assuming you’re the kind of person who enjoys looking at things like course learning objectives. (Is there anyone who doesn’t?)

This is a word cloud representation of the current course learning objectives for most of EvCC’s courses. It was generated using Voyant Tools, an online text analysis platform that can do all sorts of neat and sophisticated things with large quantities of text.

By default, the word cloud displays the most common words appearing in the collected course learning outcomes across all departments and divisions. You can move the Terms slider to display fewer or more words. If you’d like to look at the outcomes for a single course, click the Scale button, select the “Documents” option, and then choose the specific course you’re interested in.

I find this visualization interesting to think about in relation to Bloom’s Taxonomy of Educational Objectives (a nice web version can be found here). By removing a lot of the domain- and subject-specific words that often appear in learning objectives, the word cloud view illuminates some of the broader categories of learning our courses identify as essential to a student’s progress through a given course and program of study. Looking at these categories in terms of their position along Bloom’s spectrum of lower-to-higher-order thinking strikes me as a productive and potentially revealing exercise: what should we make of the prominence of words like “demonstrate,” “describe,” and “identify” and the diminutive size of “analyze” and “create”?
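For the curious, that lower- vs. higher-order balance is easy to eyeball with a quick script. Here is a minimal Python sketch using a hypothetical handful of outcome statements and an illustrative (not authoritative) grouping of Bloom’s verbs; the real corpus would come from the college’s curriculum records:

```python
from collections import Counter
import re

# Hypothetical sample of course learning outcomes.
outcomes = [
    "Demonstrate proper lab safety procedures.",
    "Describe the stages of mitosis.",
    "Identify major macromolecules and describe their functions.",
    "Analyze experimental data and create summary graphs.",
]

# Verbs loosely grouped by Bloom's lower- vs. higher-order levels
# (an illustrative grouping, not a canonical one).
lower_order = {"demonstrate", "describe", "identify", "define", "list"}
higher_order = {"analyze", "evaluate", "create", "design", "synthesize"}

# Tally every word across the corpus, then sum by group.
words = Counter(re.findall(r"[a-z]+", " ".join(outcomes).lower()))
lower = sum(words[w] for w in lower_order)
higher = sum(words[w] for w in higher_order)
print(f"lower-order verb count: {lower}, higher-order: {higher}")
```

Run against the full set of outcomes, a tally like this would quantify the imbalance the word cloud only hints at.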

At a recent conference on departmental support of evidence-based teaching practices in Biology (PULSE), I picked up two metacognition techniques to bring into my classrooms. They seemed so powerful and, honestly, so easy to implement that I tried them the following week.

The first idea stems from work that Ricky Dooley (a new colleague in Biology) developed with Scott Freeman and others at the University of Washington. In my majors’ Biology class, I give weekly quizzes over the past week’s material — standard in-class quizzes, mostly multiple choice (taken with iClickers) with a short-answer question here and there. Student performance was mixed, and when we went over the correct answers, many students had “ah-ha” moments when ideas began to click.

Of course, these ah-ha moments came a few moments too late to help on that particular quiz. What I’ve begun doing is flipping that around. First, I’ve moved the quiz entirely onto Canvas. And rather than the usual 10 questions/10 points, the quizzes are now 20 questions, still worth 10 points. The first question is the usual question I would ask (though I’ve added more short-answer questions, reflecting the kinds of questions I’ll ask on the exams). This first question, like all of the odd-numbered questions, is worth zero points, so there’s no risk to students in doing their best from memory (and no reason to cheat). The second question, like all of the even-numbered questions, repeats the same question, followed by how I would answer it. It then asks students whether they think they got it right, wrong, or somewhere in between. If they didn’t get it right, I ask them to 1) explain why they got it wrong, 2) state what the right answer is, and 3) explain why the right answer is correct. This question is worth 1 point, and I grade it based on how they’ve reflected on their work. Sometimes, in their explanations, students will still not fully understand the material. Here, it’s very easy for me to jump in (while grading) and help them individually. An additional benefit is that these quizzes, with the added short-answer questions, more closely resemble the question types I have on my midterms.

The first time I did this (in the 5th week of this quarter), my last question asked students their opinion on this new style of testing. With the exception of one student who was already doing exceptionally well, feedback was very positive. Students appreciated the ability to correct themselves and felt that they understood the material better. Their explanations seemed genuine to me, so I’m hopeful that they’ll perform better on our midterms.

The second idea I implemented I borrowed from another biology colleague, Hillary Kemp. I’ve used it in my non-majors Cellular Biology course, one that is typically tough for many students as they begin their path toward an allied health degree. Exam performance on my short-answer questions is always spotty (they include lots of higher-order Bloom’s Taxonomy questions). Usually I would go over the correct answers with the class in the hope that students would do better on the final. Now, rather than going over those answers, I give them their marked-up short-answer sections back and let them correct their answers for partial credit. I stress that in their corrections I’m looking for them to explain why they got it wrong and why the correct answer is correct. This is worth just enough to eliminate the need to curve the exam (essentially, they’re working to “earn” the curved points). In my large class (n=48), results were mixed. Many students clearly explained why they got it wrong and now understand why the correct answer is correct. Others, however, simply wrote down correct answers or, worse, Googled the answer and wrote down technically correct answers well above the level of our course. Again, I awarded points based on their explanations rather than the correctness of their answers. I think this exam reflection is helping the students who genuinely want to do well in the class, as opposed to those who are perhaps not so sure about this degree path. I’m hopeful that performance on our comprehensive final will show improvement because of this reflection exercise.

This post was generously contributed by Jeff Fennell, who teaches in the Biology department at Everett Community College.

As our college has been gearing up for its accreditation site visit, which happens next week, I’ve been thinking quite a bit about our seven Core Learning Outcomes (CLOs). Naturally, they play a fairly large role in the self-evaluation report that we’re submitting to the accreditors, so that’s one reason they’ve been on my mind. But I’ve also been thinking about connections among those college-wide outcomes, and how those connections inform EvCC’s ongoing Guided Pathways work in various ways.

Noodling about on that topic recently, I found myself wanting to visualize how many CLO connections there actually are among the courses we offer. In particular, I wondered whether there might be certain clusters of courses that all support the same CLOs, even if the courses themselves are part of different departments, programs, or degree/certificate pathways. Here’s the visualization I came up with:

To get a sense of how courses in different divisions are connected to one another via shared CLOs, click on the name of a division you’re interested in. This will highlight all courses within that division. You can also click and drag any of the nodes representing an individual CLO; the node will lock in place wherever you release it, which can make it a bit easier to see which courses are clustered around specific outcomes. Hover your cursor over an individual course to reveal the specific CLOs it introduces. (A larger view is also available.)
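For readers curious what’s happening under the hood: conceptually, the network is a two-mode mapping between courses and CLOs, where two courses cluster together whenever they share an outcome. Here is a minimal Python sketch using entirely made-up course-to-CLO data (the real data comes from our curriculum records):

```python
from collections import defaultdict

# Hypothetical mapping of courses to the Core Learning Outcomes (CLOs)
# they introduce.
course_clos = {
    "BIOL&160": {"Quantitative Reasoning", "Scientific Inquiry"},
    "ENGL&101": {"Communication", "Information Literacy"},
    "MATH&146": {"Quantitative Reasoning"},
}

# Invert the mapping: for each CLO, which courses introduce it?
clo_courses = defaultdict(set)
for course, clos in course_clos.items():
    for clo in clos:
        clo_courses[clo].add(course)

# Two courses are "connected" in the network when they share a CLO,
# even across departments -- exactly the clusters the graph reveals.
shared = sorted(clo_courses["Quantitative Reasoning"])
print(shared)
```

A visualization library then draws each course and each CLO as a node, with an edge for every course-to-CLO link; the clustering falls out of the layout.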

Do you see any interesting patterns in the network? Are there any groupings of courses you might not expect? How might visualizations like this help us see new connections or patterns that could help us approach our Guided Pathways efforts–particularly the process of developing pathways and program maps–with fresh ideas or insight into possible points of intersection across our college’s academic divisions?

First, a shameless plug: EvCC instructor Joe Graber and I will be teaming up to offer a one-hour workshop on October 3 on using the EvCC lightboard, built by a team of engineering faculty, to create engaging and effective instructional videos. If you haven’t already done so, mark your calendar!

With videos on my mind recently, and with this being a time of the year when many faculty are creating new videos to share with their students, I thought it might be useful to address a couple of the myths, misperceptions, and generalizations about instructional videos that I encounter most frequently.

Students don’t need to see me in videos. All they need to see are my slides and the information I’m presenting. (Besides, I hate being on camera!)

How frequently in your teaching do you use simple data to help students understand an important concept or trend, or to create opportunities for students to incorporate data into their own critical thinking around a particular subject or topic? Chances are you use data of some kind fairly frequently, even in disciplines that aren’t known for being particularly data-heavy. (As an example, in literature courses I taught I would frequently talk to students about, say, trends in literacy rates during the period we were studying, or shifts in newspaper circulation and public library memberships. In other words, I would share data that helped contextualize what we were reading in contemporary social, cultural, and economic conditions.)

All too often, when we use data in classes we treat it as something that is fairly static: a printed handout, an image on a slide, or a graph we draw on the whiteboard. There’s nothing wrong with that, exactly, but I often find myself wanting to give students a better entry point into data — and, more importantly, to help students understand the story the data can help us tell. “Teaching with data” is a broad category that can mean many things, but I take as one of its fundamental components a desire to teach students how to think with data and to construct meaning from it. So I was very excited to see that the Knight Lab recently released a tool for creating simple annotated charts. It’s called Storyline, and while its features are minimal I think it has great teaching potential.

Storyline is a web-based tool, and it’s so easy to use that if you know how to make a spreadsheet you can certainly make a Storyline. At the moment, Storyline makes it possible to generate a time-series line chart (essentially, a chart that shows a data variable over time) with up to 800 data points.

Unlike a static chart, Storyline allows you to attach brief textual annotations to individual data points. Here’s what it looks like in action:

The annotations are displayed in sequential order beneath the chart. Interaction with the chart can take two forms: clicking an annotated data point (those shown as circles on the chart) or clicking an annotation bubble beneath the chart. Go ahead — give it a try in the example above. And then keep reading to find out how to create your own…
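If you’re wondering what the underlying spreadsheet might look like, here is a rough Python sketch that writes the kind of three-column structure a time-series-plus-annotations chart needs: one time column, one value column, and annotation text attached to specific rows. The column names and data here are purely illustrative; check the Storyline documentation for the exact format it expects:

```python
import csv
import io

# Hypothetical time-series data: a year, a value, and an optional
# annotation for that data point (blank where no annotation is wanted).
rows = [
    ("1990", 62.3, ""),
    ("2000", 55.8, "Internet use begins to cut into circulation"),
    ("2010", 43.4, "Smartphones accelerate the decline"),
]

# Write the data as CSV, the same structure you'd build in a spreadsheet.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["date", "value", "annotation"])
writer.writerows(rows)
print(buf.getvalue())
```

In practice you would build the same three columns directly in a spreadsheet and point Storyline at it; the script just makes the shape of the data explicit.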

Last Thursday’s Opening Week session on “Cool things faculty are doing in the classroom,” facilitated by my colleague Peg, was great fun–and a good chance for me to find out more about some of the thoughtful and innovative work EvCC faculty are doing. I learned something from every presenter, and as a result my notebook is now brimming with new ideas for future workshops, conversations, and potential blog posts.

For now, though, I’ll mention just one of the cool things from the session: Joe Graber’s demonstration of the lightboard he and some of his EvCC engineering colleagues have constructed over the past year and are now using to create videos for their courses. What’s a lightboard, you ask? It’s essentially a transparent, edge-illuminated chalkboard you can use to create videos that show you and what you’re writing at the same time. If that’s hard to envision, take a look at this demonstration video that Joe has created to show off some of the lightboard’s uses and capabilities:

This is DIY educational technology at its best!

Joe will be hosting an informal demonstration at 2:30 p.m. on Tuesday, September 19, in Whitehorse 109 if you want to stop by to take a quick look. Later this fall, we’ll also be offering a workshop on creating videos using the lightboard, combining a discussion of best practices in planning and structuring lightboard videos with an opportunity to visit the lightboard studio and give it a try yourself.

[Update 9/20/2017 — Joe and I will be facilitating a workshop on October 3, at noon, in Whitehorse 105. We’ll discuss recommendations for creating effective videos using the lightboard, then spend some time putting it through its paces. Light snacks will be provided, but bring your lunch — and your curiosity! For complete details, see our schedule of upcoming workshops.]

Last week I posted briefly about exploring some simple data showing how many EvCC courses use Canvas. This time around I’m turning my attention to Panopto, our video content management platform. Extracting useful information out of Panopto is a bit harder, so I figured I’d start with something simple: the total number of video hours viewed by (anonymized) course.

I’m asked on a fairly regular basis how many courses at EvCC use the campus learning management system, Canvas, in some capacity. There are many reasons for this question–ranging from general curiosity to specific ideas the questioner may have about, say, the most effective methods for communicating with students–but until fairly recently I couldn’t provide a very reliable answer. That’s partly due to the fact that we automatically create an empty Canvas course (what we sometimes call a “shell”) for every course at the college, meaning we can’t simply assume the existence of a course in Canvas indicates active use by the faculty member teaching that course. The difficulty in pinning down exactly how many courses use Canvas is also due, in part, to the many other purposes for which faculty, staff, and students use Canvas: clubs and student organizations; departmental or program-based groups; faculty and staff programs; and so on.

Unsatisfied with only being able to say that “many” or “the majority” of courses at the college use Canvas in some way, I set out last fall to develop a more reliable measure of Canvas use and its change, if any, over the past few years. I’m happy to say the results are in. By combining course information from our student management system with data from the Canvas API, we can quickly identify the subset of Canvas shells that correspond to courses students take for credit at the college. Then, within that subset, we look only at those courses that have been published and that have at least 3 students enrolled. (I won’t bore you with the details of why that is necessary, but in general it helps filter out a variety of unusual cases that might otherwise provide a false sense of the rate of Canvas use.)
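For the technically curious, the filtering step can be sketched in a few lines of Python. The record fields below (`workflow_state`, `total_students`) mirror the kind of information the Canvas API can return for a course, but the data, IDs, and threshold shown here are entirely made up for illustration:

```python
def counts_as_active(course, credit_course_ids, min_students=3):
    """Treat a Canvas shell as 'in use' only if it corresponds to a
    credit-bearing course, is published, and has enough students."""
    return (
        course["id"] in credit_course_ids
        and course.get("workflow_state") == "available"  # i.e., published
        and course.get("total_students", 0) >= min_students
    )

# Hypothetical shells fetched from Canvas, matched against a set of
# credit-bearing course IDs from the student management system.
shells = [
    {"id": 1, "workflow_state": "available", "total_students": 25},
    {"id": 2, "workflow_state": "unpublished", "total_students": 30},
    {"id": 3, "workflow_state": "available", "total_students": 1},
    {"id": 4, "workflow_state": "available", "total_students": 40},  # a club, not a course
]
credit_ids = {1, 2, 3}

active = [c["id"] for c in shells if counts_as_active(c, credit_ids)]
rate = len(active) / len(credit_ids)
print(active, f"{rate:.0%}")
```

The same filter, applied to the full list of shells each term, is what produces a term-by-term usage rate rather than a raw count of shells.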

This yields a reasonably good approximation of actual Canvas use for credit-bearing courses at EvCC:

As this chart shows, 83% of courses at the college used Canvas in the spring of 2017, up from about 68% when we first moved to Canvas in 2013.

Obviously, this doesn’t tell us anything at all about how Canvas is being used, or why, or whether it benefits students or faculty. There are other data that could help us begin to investigate all of those more nuanced and complex questions–and I hope to write about some of that here in the future–but these numbers alone don’t tell any of those stories. Still, it’s interesting to observe the adoption of this particular platform on our campus over time.

In a previous post, I introduced Zotero–a free, open-source research tool–and suggested exploring ways to use it in classroom activities and student assignments. Zotero has been part of many librarians’ research and instructional toolkits since its early days, so the idea that it should have a place in the classroom is by no means a new one. Instructors have also been incorporating it directly into their courses for some time, often with the explicit goal of improving students’ literacies and familiarity with individual and collaborative research practices.

One of the hardest things to do in the span of a single class is to contextualize new information so that students learn to see individual facts or concepts in relation to one another. A shared Zotero collection is one way to engage students directly in that process of contextualization, helping them develop a more realistic view of the depth and breadth of a particular field of study than is often possible in an introductory or survey course. As a bonus, it happens to be useful to you as an instructor as well, since the collection created by one cohort of students can become a resource to be used in future courses or, perhaps, to provide new examples or readings you can add to the course when revising it.

Let’s say that I’m teaching an introductory environmental science course whose purpose is to give students a broad conceptual foundation for studying both the environment and the impact of human activities on it. I’ve decided that I’m going to unify the various topics we’ll cover by focusing on a common theme that we’ll return to throughout the quarter: modern agricultural practices and the challenges of mitigating their environmental consequences while also feeding a rapidly growing human population. So while the course as a whole will include units that introduce atmospheric science, ecology, biodiversity, and so on, each unit approaches those specific topics by considering their effects on some particular aspect of agriculture, and vice versa. For example, a unit that addresses freshwater ecosystems might include a discussion of the effects of nitrogen runoff resulting from industrial agriculture.