Teachers have been receiving hundreds and hundreds of student responses on longer videos, and it became obvious that we needed a way to separate out participation by group, student, sentiment, and theme, just the way we do on text documents.

We love our tick marks and the quick overview they give you, so you’ll find them in their usual place along the yellow video timeline. As before, there’s one tick mark per response at a particular timestamp in the video, and the colors match the type of sentiment. Multiple responses at the same timestamp still stack on top of each other, so you can spot the points of focus by both tight clustering and the height of the bars.
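The stacking behavior described above can be sketched as a simple grouping computation. This is an illustrative model only: the field names (`timestamp`, `sentiment`) and data shapes are assumptions for the sketch, not Ponder’s actual data model.

```python
from collections import Counter

def stack_tick_marks(responses):
    """Group responses by timestamp so co-located tick marks stack.

    `responses` is assumed to be a list of dicts with a `timestamp`
    (seconds into the video) and a `sentiment` label -- hypothetical
    shapes used for illustration.
    """
    stacks = {}
    for r in responses:
        stacks.setdefault(r["timestamp"], []).append(r["sentiment"])
    # Height of each stack = number of responses at that timestamp;
    # the sentiment counts determine the colors in the stack.
    return {
        ts: {"height": len(s), "sentiments": Counter(s)}
        for ts, s in stacks.items()
    }

marks = stack_tick_marks([
    {"timestamp": 42, "sentiment": "analytical"},
    {"timestamp": 42, "sentiment": "emotional"},
    {"timestamp": 90, "sentiment": "cognitive"},
])
```

Two responses at second 42 produce a stack of height two, while the lone response at second 90 stands alone.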

Now, above the timeline, underneath the video, you will find a set of filter drop-downs corresponding to the activity on the video.

The first drop-down allows you to filter the responses by group; for example, so a teacher can see one section or period of a course at a time. The numbers in parentheses indicate the number of responses created by each group.

Want to see just your responses, or those from a particular student? The second drop-down shows each responder, sorted by the number of responses they created, indicated in parentheses next to each username.

The third drop-down shows the mix of sentiments used in the responses, sorted by frequency (indicated in parentheses), and allows you to filter for them.

And the fourth drop-down shows the themes used in responses on the video, sorted by frequency, indicated in parentheses.

As you can see, much of the Ponder power you are familiar with when navigating ideas across documents is now available for minute dissections of a single video.

And don’t forget, these capabilities are all available for custom integration on your platform through the Ponder API.

The piece introduces a framework for teaching adolescents to read, but it also advocates a more thoughtful approach to the application of technology in the English classroom through an introspective exploration of what it means to be digitally literate.

“We must advocate for digital literacy, not just technology, in a way that reconceptualizes our discipline.”

A sample Ponder lesson walk-through in the piece explains how the collaborative annotation experience supports shared inquiry during research for an essay, and how Ponder’s interface speeds the teacher’s review of student activity, allowing them to better lead class discussion.

Ponder provides a place to collect and share your thoughts about your reading, but what to do when you’ve collected a lot of thoughts on a particular piece? Even tens of responses on a single page can get overwhelming, and groups of students often create hundreds, so we’ve added some tools to make it easier to navigate them.

We love our tick marks and the quick overview they give you, so you’ll find them in their usual place on the right side of the window. As before, there’s one tick mark per excerpt that elicited at least one response, and the colors match the type of sentiment. You’ll also still find each selection underlined in the page, so you’ll see and can reply to them as you’re reading.

Introducing the Ponder Sidebar

As before, clicking a tick mark or underline will scroll your window to the location of the corresponding text in the document, but it will also expand the Ponder sidebar where the new review tools live. (If you need to dismiss the sidebar, just click anywhere outside the sidebar.)

In the sidebar, you’ll see a list of all the excerpts from your groups. Similar to your feed, all the responses for a given excerpt are bundled together in a “nugget”. When the sidebar opens, the nugget for the tick mark you clicked will be highlighted. You’ll also see some summary stats and drop-downs – more on that in a moment.

Anatomy of a Nugget

The nugget shows the sentiment of the user who made the first response on that excerpt; in this case, badtz appreciates the eloquence of the statement “We are not interested in students just picking an answer, but justifying the answers.”

At the bottom, you can see that one other user has replied to badtz’s comment, followed by a green box with a 1 and a yellow box with a 1. Each box indicates the number of responses of that sentiment type. In this case, badtz’s response was a green/analytical comment. Clicking the ellipsis exposes the details of the yellow/cognitive reply.

Replying and removing responses

Mousing over the nugget gives you the option to add your own response to the excerpt (Respond/Update) or remove your response (the X).

Sorting and Filtering

But what if there are a bunch of responses? We’ve added the ability to sort and filter to make it easier to review responses. At the top of the sidebar, you now see summary metrics for the document – the total number of excerpts annotated and the number of annotations on those excerpts. Using the drop-downs at the top, you can filter those responses by group, responder, sentiment, and theme.

The first drop-down allows you to filter the responses by group; for example, so a teacher can see one section at a time. The numbers in parentheses indicate the number of responses created by each group.

Want to see just your responses, or those from a particular student? The second drop-down shows each responder, sorted by the number of responses they created, indicated in parentheses next to each username.

The third drop-down shows the mix of sentiments used in the responses, sorted by frequency (indicated in parentheses), and allows you to filter for them.

And the fourth drop-down shows the themes used in responses on the document, sorted by frequency, indicated in parentheses.

The filters cascade; for example, when you filter for a particular group, the other drop-downs will only list the users, sentiments, and themes present in that group’s activity.
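The cascading behavior can be sketched as follows. The field names and data shapes here are assumptions for illustration, not Ponder’s actual schema: each filter narrows the response set, and the counts shown in parentheses are recomputed from whatever survives the other filters.

```python
def apply_filters(responses, group=None, responder=None, sentiment=None, theme=None):
    """Filter responses on any combination of facets; field names are illustrative."""
    def keep(r):
        return ((group is None or r["group"] == group)
                and (responder is None or r["responder"] == responder)
                and (sentiment is None or r["sentiment"] == sentiment)
                and (theme is None or theme in r["themes"]))
    return [r for r in responses if keep(r)]

def facet_counts(responses, field):
    """Counts shown in parentheses next to each drop-down option."""
    counts = {}
    for r in responses:
        values = r[field] if isinstance(r[field], list) else [r[field]]
        for v in values:
            counts[v] = counts.get(v, 0) + 1
    return counts

responses = [
    {"group": "Period 1", "responder": "badtz", "sentiment": "analytical", "themes": ["evidence"]},
    {"group": "Period 1", "responder": "keroppi", "sentiment": "cognitive", "themes": []},
    {"group": "Period 2", "responder": "badtz", "sentiment": "emotional", "themes": ["evidence"]},
]

# Filtering by group first, then counting sentiments, shows how one
# filter reshapes the options offered by the others.
in_group = apply_filters(responses, group="Period 1")
options = facet_counts(in_group, "sentiment")  # {'analytical': 1, 'cognitive': 1}
```

Filtering for Period 1 removes the emotional response entirely, so the sentiment drop-down would no longer offer it.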

Lastly, underneath the filters is the sort drop-down.

# of Replies sorts the nuggets by the number of replies that occurred on each.

# of Themes sorts all of the excerpts by the number of themes that were tagged to each.

Controversy sorts the excerpts by the measure of disagreement based on sentiment and sentiment type usage on each.

Last Updated shows the most recently updated nuggets first.
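The four sort modes above amount to choosing a sort key per nugget. A minimal sketch, with illustrative field names: Ponder’s actual controversy measure isn’t published, so here disagreement is approximated as the number of distinct sentiment types on the excerpt.

```python
def sort_nuggets(nuggets, mode):
    """Sort nuggets per the sidebar's sort drop-down.

    Field names are assumptions for this sketch. 'Controversy' is a
    stand-in: distinct sentiment types as a rough disagreement proxy.
    """
    keys = {
        "# of Replies": lambda n: n["reply_count"],
        "# of Themes": lambda n: len(n["themes"]),
        "Controversy": lambda n: len(set(n["sentiments"])),
        "Last Updated": lambda n: n["last_updated"],
    }
    # Highest value first, matching "most replies / most recent on top".
    return sorted(nuggets, key=keys[mode], reverse=True)

nuggets = [
    {"reply_count": 1, "themes": ["style"], "sentiments": ["analytical"], "last_updated": 10},
    {"reply_count": 4, "themes": [], "sentiments": ["analytical", "emotional"], "last_updated": 5},
]
by_replies = sort_nuggets(nuggets, "# of Replies")
```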

As you can see, much of the Ponder power you are familiar with when navigating ideas across documents is now available for minute dissections of a single document or passage.

And don’t forget, these capabilities are all available for custom integration on your platform through the Ponder API.

INS publishes a steady stream of well-produced videos of provocative conversations with thought leaders on a range of social, political, and economic issues that start you pondering. Luckily, our API provides a way for readers and watchers to articulate that pondering quickly and thoughtfully. Their implementation also demonstrates the flexibility the API provides in integrating annotation and discussion into each partner’s unique look and feel.

So far they have integrated the video response interface, visible to the right of the YouTube embed in the screenshot below, with 8 sentiments and an elaboration box. Below the video, you can see the response timeline, with tick marks indicating the points in the video at which users have commented. One user’s comment is selected, and the sentiment and elaboration are visible. Of course, you don’t have to squint at the screenshot — you can see this particular video piece (and others) live!

Parlor Labs, the company behind the Ponder micro-response platform, is proud to announce a new partnership with Sunburst Digital, a leader in sales, implementation, and support for K-12 schools across the country.

Sunburst has successfully implemented instructional technology and digital content solutions across US campuses for nearly three decades, and this spring we have worked with them to add Ponder to their library of instructional technology and digital content solutions.

Ken Leonard, chairman and CEO at Sunburst Digital commented, “We are excited to enter into a partnership with Parlor Labs, an organization dedicated to creating a social reading experience that both expands the breadth of what our students are reading as well as deepens their understanding of how we engage with the world around us.”

I speak for all of us when I say we are similarly excited to have their veteran support organization behind our product!

V2 is more robust and brings many enhancements to our white- and gray-label integration scenarios, and we are making it publicly available. At a high level it supports:

Account Creation & Authentication (SSO)

User & Group Administration

Retrieving Activity Data

“Native” User Interactions (without the browser add-on)
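As a sense of what "Retrieving Activity Data" might look like in practice, here is a hypothetical sketch of building an authenticated API request. The base URL, endpoint path, and bearer-token auth scheme are all assumptions for illustration, not the real Ponder API surface — consult the actual API documentation for the details.

```python
import urllib.request

API_BASE = "https://api.example.com/v2"  # placeholder, not Ponder's real endpoint

def build_activity_request(token, group_id):
    """Build an authenticated request for a group's activity data.

    The path and Authorization header are hypothetical; a real
    integration would follow the Ponder API docs.
    """
    return urllib.request.Request(
        f"{API_BASE}/groups/{group_id}/activity",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_activity_request("t0k3n", "section-1")
# Sending it (urllib.request.urlopen(req)) would return the group's
# activity feed as JSON in this sketch.
```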

Ponder’s cognitive analytical emotional heat map works on video too.

These methods are designed around scenarios where partners layer the Ponder micro-response interface and heat map on top of their content (text and video), extending their infrastructure to incorporate flexible, structured, and thoughtful content-driven interactions between their users.

Of course, if you’re interested in integrating Ponder into your service, get in touch.

Along the way, we spent a bunch of time investigating various API documentation tools and fell in love with Speca.io, so we wanted to give them a shout-out for making a great tool. A few great features:

We are proud to announce that Ponder is the recipient of a generous grant from An Chomhairle um Oideachas Gaeltachta & Gaelscolaíochta (COGG) to adapt Ponder for Gaeilge, the Irish language. The grant covers the costs of the language work and a professional development workshop to kick-off a pilot in schools across Ireland in preparation for broad availability in Irish-language classrooms and reading groups.

More than ten schools have registered their interest so far, and we will be running a day-long workshop in early December. Interested Gaelscoileanna and English-medium schools should contact us by selecting Webinar Request in this ticket form. We began running online webinars to provide additional background and answer questions and will be running more in the coming weeks.

Language teachers are familiar with the challenge of fully immersing students in a language. Reading casually in the language and chatting with friends are important parts of building and maintaining fluency. Outside of assigned homework, Ponder supports these activities by creating a pedagogically-sound social media environment for students to practice Gaeilge.

Adapting Ponder for a new language is both a linguistic and a cultural translation process and is always fascinating.

Pilot Details

Beyond the opportunity to shape Ponder for Gaeilge, participating schools will receive:

A free year-long site license of Ponder for all of their teachers and students

A travel stipend for one teacher to attend a day-long workshop on using Ponder

ICT Implementation support

The Ponder workshop will include:

An overview of Ponder and common implementation strategies.

Small group brainstorming of lesson ideas by subject area.

1:1 hands-on setup of classes, using materials teachers bring with them to the session.

Behind the scenes here at Ponder, we have been slowly expanding our language coverage in collaboration with enthusiastic Ponder educators! Beyond the work on Gaeilge, we now support Español and عربي, and have a sentiment set in progress for Rwandan.

Adapting Ponder for a new language is both a linguistic and cultural translation process – you can’t create a slang-infused critical thinking scaffold without a lot of head-scratching and word play. One component is identifying and incorporating relevant idioms and proverbs to provide a more fluent and poetic discussion experience. It’s a collaboration with native-speaking educators, and requires classroom time to gather the feedback necessary to get the tone of individual sentiments correct, as well as the distance between different sentiments.

If you’d like to work with us to create a sentiment set in another language, let us know!

Students lose their way in school for a broad range of reasons, many of which are non-academic, and the hope behind Robin Hood’s prize is that technological solutions can help them scale already proven strategies for improving matriculation rates.

Self-control – how to shore up self-control through social bonds, incentives and tricks you play on your own psyche.

Prospective Memory – not only remembering to do something, but following through to actually do it.

Social Norms – the human tendency to choose “normal” over “right”.

On the whole the sessions were lively, peppered with informal experiments, anecdotes and studies that illustrated key points through examples rather than jargon and formal definitions. Every session provoked incisive questions from the audience. For our part, we walked away with much more specific ideas for the design and implementation of our solutions as well as a host of questions for the folks at Robin Hood and CUNY, mostly around how the program will be introduced to the students.

Sendhil Mullainathan’s well-argued presentation on the Psychology of Scarcity made abundantly clear how poverty in one area of life (financial) creates poverty in another (academic performance). Study after study showed how even subtle reminders of financial stresses degraded cognitive performance. (You can get a synopsis from his New York Times piece on the same topic.)

Given that the nature of our relationship with the students will be long-term, another question we had was: Does the effect of priming wear off over time with repeated exposure, positive or negative?

Another issue this brought up for us is whether the mere fact that students are participating in this program reminds them of their “remediation” status, thereby undermining our efforts to bolster their performance. As we understand it to date, only remedial students will be using our technologies. Are we missing an opportunity to build technologies that help remedial students feel a part of the CUNY community as a whole?

Filling out Gigantic Forms

William Congdon talked about hassle factors. We could all relate to the hassles of coordinating calendars that span different aspects of life (work, school, childcare, family, commuting). Ideas42 in particular is working on improving the onerous process of applying for financial aid. Two approaches that came up repeatedly were: 1) defaults to reduce the cognitive load of making decisions; and 2) pre-filled forms to remove the hassle of having to “look up” information the university already has. So we’re wondering:

Will participation be mandatory or will students be asked to decide? If the latter, will their participation be assumed with the option to opt out or will they be asked to opt-in?

Will students be pre-registered, or will they need to go sign up? Can we piggy-back on their CUNY accounts?

Will we have API access to student schedules and class syllabi, or will we need to ask the student to provide that information to us separately?

Information Overload

William also covered Limited Attention, a familiar topic in modern day life. One interesting tidbit from this session: It’s generally assumed that students’ preferred mode of communication is SMS. However, like Twitter, email before it and perhaps Yo! to come, what happens when every system and organization shifts to using text messages? More specifically, how will our communications with students interact with / collide with CUNY’s existing student support program START?

Hey, remind me to…

Matt Darling presented on Prospective Memory, the art of following through on future commitments. Memory, or the lack of it, is clearly the first problem to overcome. But assuming you are able to implement some kind of reminder system, how do you actually make those reminders count? Hassle factors and self-control (see below) come into play for sure. But Matt pointed to the power of “being specific” as one simple technique that doesn’t rely on the student to be more disciplined.

It made us reconsider how we’re thinking about designing our reminders. When you send them, how often you send them, and the language you use in them of course remain important factors. But the real design problem lies in what exactly you are reminding the student to do, and how you want them to respond to the reminder.

Specifically for us, we’re working on ways to make tasks more concrete and bite-sized (aka, doable), tasks students can easily imagine completing successfully in a limited amount of time.

Creating Community

Allie Rosenbloom reminded us of the now famous marshmallow test. Self-control or willpower is a tricky issue in light of Sendhil’s ideas about scarcity. In an environment of scarcity, there simply isn’t a lot of self-control to go around. Social supports and personal incentives (e.g. betting against yourself) were two approaches discussed. The challenge we see ahead is how to create social supports through our technology when the students who will be using our service may or may not be in the same classes or even on the same campus, due to the structure of the Randomized Controlled Trial (RCT). We are encouraged, though, that the student population will be big enough that we can build community around shared interests and career aspirations, if not coursework. Allie’s talk also supported the idea that “getting specific” with tasks would be a boost to performance because, as we all know, focusing on “exercising today” is a lot easier than thinking about the 20 days of exercise you committed to for the month.

What is everyone else doing?

Finally, social norms come into play – the studies noted here focused on public service announcements that, in trying to discourage problematic behaviors, end up reinforcing them. Examples included posters designed to discourage binge drinking that make a reader who doesn’t drink feel abnormal (since everyone else must clearly be drinking), or signs that provoke petrified-wood theft rather than discourage it.

Most relevant here is that commuting community college students (the majority of them) often feel isolated, and don’t have a good sense of how other students are handling the challenge of college. Social norms seem most relevant to us in terms of Ponder providing an atmosphere where students feel connected with their classmates, are aware they are working hard, and engage with one another through their college and career interests. We wondered if we could coordinate with existing CUNY support services to counteract the somewhat disparate nature of the randomly selected participating students, and provide an in-person, face-to-face experience.

We’re thinking about how we can use the data we have about student progress to reshape students’ sense of “what’s normal” when it comes to school. Our goal would be to not only show students how others like them are succeeding in school, but to also paint a realistic picture of how much time and effort it takes to succeed at school. At the very least we can prevent students from feeling discouraged because it takes them ‘too long’ to study; or because they feel uniquely selfish in spending so much time on school in light of their other obligations.

At the beginning of the day David Crook from CUNY voiced his enthusiasm for the teams and the prize; having had an opportunity to digest all of the above, we can’t help thinking it would be great to have a second webinar to drill into the data with this new perspective.

All in all I think I’ve demonstrated here that the workshop provided much food for thought and advanced our thinking greatly towards our prototype for January. We also finally got an opportunity to meet and learn from the other teams in the Challenge. Thank you Robin Hood and ideas42 for organizing!

This costs money, but it will also be a constant source of distractions. And because of helpful things like Murphy’s Law, those distractions will happen when you really don’t need them, to the students who really don’t need them.

If you measure online-texts against what paper-texts are good at (freedom from distraction, physical cues to provide context and focus your attention), it is no revelation that online texts will lose every time.

So, if you are thinking of trading a paperback or a Xerox in your classroom for a screen, don’t do it.

Unless you have a really good reason. Like: if my students can look up words while they read, they’re much more likely to keep at reading hard texts. If my students have a smart way to track their reading across lots of different documents, they have a much easier time seeing the connections between texts and, as a result, write better papers.

These are things computers are good at. So if we start with what computers are good at and we measure paper texts against online texts, there should also be no surprise that online texts will win (provided the software delivers on its promise).

So here’s a different rule of thumb to consider, one informed by the research I cite above and reinforced by countless conversations with teachers:

If you’re considering moving to e-texts, don’t, unless it does nothing short of transforming your classroom in ways that paper can’t and has something to do with learning, not functionality.

i.e., This tool will help me push my students to re-read passages they didn’t fully understand, which in turn will get them to be more proactive about asking questions in class. Compare that to: This tool makes it possible for my students to see each other’s comments as they read. (The latter is a description of software functionality, and a rather high-level one at that, which may or may not be implemented in a way that has pedagogical value.)

Sometimes an instructor knowing specifically why and how they’re going to use a certain tool makes all the difference in efficacy, so two teachers using the same software can experience drastically different results.

Other times, getting specific with what you intend to use software for is precisely the “missing information” you need to separate the wheat from the chaff when evaluating tools.

In other aspects of our lives, this would be considered stating the obvious. After all, knowing what I care about in a product is how I evaluate the relevance of other people’s reviews of said product, which is why online review forums typically ask if you found the review helpful, as opposed to if you found it informative.

However, for whatever reason, this is still something we’re learning to do with edtech.

In either case, the logistical wins of going from paper to digital alone are not big enough to offset the logistical problems of managing digital, or the cognitive hit we all take when reading from a screen.