Blog

I’m using my first blog post of the year for a little self-promotion: I have a new paper out!

In this year’s International Computing Education Research (ICER) conference proceedings, Carla Strickland, Diana Franklin, Andrew Binkowski, and I share our recently developed learning trajectory intended to guide instruction on the computational thinking (CT) concept of decomposition for students in K-8 (Rich, Binkowski, Strickland, & Franklin, 2018).

That was a lot of ideas in one sentence. Here’s a brief introduction to several of the ideas I just mentioned.

First, a learning trajectory (LT) is a possible pathway from a student’s existing knowledge to a desired learning goal. Martin Simon (1995) first used the term while describing the ways teachers must negotiate between knowledge of their students’ thinking and knowledge of the mathematics they are intending to teach. One purpose of an LT is to manage the tension between the needs for advanced instructional planning and for spontaneous, responsive instructional decision-making in the classroom (Simon, 1995). LTs have since become a popular theoretical construct among curriculum developers and professional development providers seeking to base their materials on research in student thinking (Clements & Sarama, 2004; Sztajn, Confrey, Wilson, & Edgington, 2012). We — meaning a group of colleagues at UChicago STEM Education — were interested in developing instructional materials for K-8 students to learn computational thinking concepts, and so we first set out to develop some LTs for CT through the LTEC project.

Second, faithful readers of my blog will certainly know that computational thinking is loosely defined as the thinking processes used by computer scientists (Wing, 2006). CT is quickly becoming a new kind of literacy all students will need to be productive and engaged citizens in our technology-oriented world.

Lastly, decomposition, or more specifically, problem decomposition, is a process of breaking down problems, objects, or phenomena into smaller, more manageable parts. We think of it as a computational thinking practice. Just as modeling, pattern-finding, and generalization, for example, are mathematical practices, decomposition is a computational thinking practice.

So, to return to the paper: It shares our work in developing a decomposition LT intended to guide CT instruction in K-8. Check out the full paper to read about the processes of synthesis and theoretical frameworks we used to guide the development of the LT. Our aim was to make the best possible use of existing research evidence about students’ learning of decomposition to form a starting point for curriculum development.

Spoiler alert: There is a lot more research out there on students’ use and creation of procedures and functions than there is about students’ overall processes of problem solving through decomposition. Our LT-building efforts made a start at connecting use of procedures to broader decomposition ideas. For example, the LT suggests that a productive intermediate learning goal might be to fluently and flexibly connect existing functions to decomposed parts of complex problems. It may be that such ideas are taught in CS courses, but according to our review, they have seldom been mentioned or studied in the K-12 CS education literature. Future research may well reveal other core but tacit ideas that would be productive to explicitly address in decomposition instruction.

Our LTEC research team has a lot of experience in the K-5 space, and so many of us are particularly interested in how decomposition ideas could be addressed with young students before they begin programming. Two particular ideas seem worthy of mention here.

First, the LTEC team is curious about the relationship between early work with decomposition and early work with another CT idea for which we already developed an LT: sequence. We previously used research evidence about students’ abilities to parse stories into steps to support our sequence LT. Through the development of the decomposition LT, we also came to see this as a kind of decomposition. We are not bothered by the duality in principle, but the double-use of this idea made us wonder whether the difference between sequencing and decomposition will feel meaningful to young students — and what the implications of the answer to that question might be for K-5 CT curriculum development.

Second, I have been thinking a lot about how decomposition in CS/CT relates to decomposition in mathematics. In both the LTEC project and my work at MSU with the CT4EDU project, one of the goals is to develop integrated mathematics and CT instruction for students in K-5. A big part of this work is to identify key ways that ideas are used similarly in the two disciplines and figure out how to leverage the similarities in instruction. Decomposition seemed at first quite similar in CT and math, but close scrutiny has led me to examine an interesting divergence.

In mathematics, the thing being decomposed is usually some kind of mathematical object — a number or shape, for example — and not the problem itself. Students decompose 25 into 20 + 5, or decompose a rectilinear figure into rectangles. This decomposition often serves the purpose of solving a complex problem, like multiplying 25 by another number or finding the area of a rectilinear figure. However, the connection between the decomposition of the mathematical object and the decomposition of the problem is not always made clear. In Common Core State Standards for Mathematics (CCSS-M) standard 3.MD.7d, the connection to the problem is clear: “Find areas of rectilinear figures by decomposing them into non-overlapping rectangles and adding the areas of the non-overlapping parts.” In CCSS-M 4.NF.3b, however, students decompose fractions with no purpose stated: “Decompose a fraction into a sum of fractions with the same denominator in more than one way.”
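The arithmetic here can be made concrete in a few lines. This is my own illustrative sketch (the function name and structure are hypothetical, not from any curriculum): decomposing 25 into 20 + 5 turns one hard multiplication into two easy ones via the distributive property.

```python
def multiply_by_decomposition(n, m):
    """Multiply n * m by first decomposing n by place value.

    E.g., 25 * 7 = (20 + 5) * 7 = 20 * 7 + 5 * 7 = 140 + 35 = 175.
    """
    tens = (n // 10) * 10  # e.g., 25 -> 20
    ones = n % 10          # e.g., 25 -> 5
    return tens * m + ones * m

print(multiply_by_decomposition(25, 7))  # 175
```

The point of the sketch is that the decomposition of the *object* (25) only pays off because it maps onto a decomposition of the *problem* (one multiplication becomes two simpler ones plus a sum).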

So, decomposition is not always discussed in CT-friendly terms in mathematics. Oddly enough, I think this divergence makes decomposition one of the strongest candidates for integration of CT ideas into elementary mathematics. In this case, adopting the CT practice of focusing the decomposition on the problem has the potential to be a subtle, achievable instructional change for teachers that:

1. Makes certain mathematical tasks more meaningful to kids. (You practice decomposing fractions because you can use those decompositions to help you add later!)

2. Gives kids an introduction to a basic CT idea in a context that fits easily into core instruction in elementary school.

Cool, right?

This and other math-CT connections will be the focus of my contribution to the ICER doctoral consortium. You can check out my abstract for that here.

This week marks the end of my first year as a PhD student! First academic year, at least — summer courses and a continued half-time assistantship await me in just a week’s time. But, still, three years from about right now, I hope to be graduating. Time has a weird way of dragging and flying by at the same time.

As at the end of last semester, I thought I’d share a few more things I learned.

The MSU Dairy Store has really, really good ice cream. I know that sounds like a weird thing to lead with, but I think it will be key to my survival over the next three years.

Courses are investments. Every course you take — especially as a graduate student — will consume a significant amount of your time and attention during a semester. And unless you’re on the 10-year-PhD plan (don’t be that student), you can only take so many courses during your graduate career. Because graduate school is a place for specialization, there are a huge number of courses available that delve into a huge number of niches. You can’t take them all. Courses need to be chosen carefully and thoughtfully, and with advice from lots of people.

Course instructors matter. One of my areas of research interest is understanding the role of the teacher in tech-infused teaching and learning. I know a thing or two about how important teachers are to the learning process. Given that, you would think that I would have realized much sooner how much impact an instructor has on a course. But it wasn’t until this year that I realized the importance of considering the instructor when making course choices. Advice about graduate school often includes talk about finding a good intellectual and personality match when choosing an advisor. Relationships with course instructors are much shorter in duration, but it’s still worth considering that fit when you have options about when and with whom to take a course.

Conducting good interviews takes a special kind of listening. I did 18 interviews this year — something I’m rather proud of, as the idea of conducting interviews really freaked my socially-anxious self out. They all had their rough patches and awkward moments, but I did feel like I got better at it over time. Before this year, I imagined that the key to a good interview is asking the right question. That is true in a sense, but what makes it really challenging is that the “right question” is different for every person. The key isn’t having really well-thought-out questions beforehand (although, of course, it’s important to have that, too). The key is to listen closely during interviews and use follow-up questions to help participants elaborate on the things that are most interesting or confusing. There’s no way to know all the “right questions” ahead of time.

The best research is conducted by people who care about what they are studying. This seems like a cliche, I know. But this semester I’ve come to realize that I underestimated how much that matters. Skilled and ethical researchers can, of course, complete a valid and sound study on anything. I’m not denying that. But without real interest in the research — without understanding exactly what problem the research is trying to solve or what issue it is addressing — researchers won’t dig as deep as they would otherwise. It’s genuine interest in a topic that serves as the impetus to really push on a problem, question and press on the findings, and use results to make meaningful decisions about what to do next. That is why my biggest and most important goal in my time here is to complete a practicum and dissertation project that really matter to me. It seems like it should be simple enough to do so, but I’m finding it more challenging than I anticipated. It’s so easy to grab at the low-hanging fruit or to choose the things that other people suggest. And sometimes those things can be worthwhile. But sometimes they end up just filling my time instead of really piquing my interest. Figuring out what matters to me is really hard and tiring work — but so, so important.

Thanks so much to all of you who have read any or all of my posts this year. This blog has been a really important tool for me to organize my thinking and try out new ideas. I’m taking next week off, but I do plan to keep writing this summer, even if it is at a bit of a slower cadence than every week.

I spent the early part of the week at the NCTM Research Conference. One of the symposia I attended was called “Contrasting Perspectives on Multiplication, Area, and Combinatorial Problems” (Izsak, Jacobson, Tillema, & Lehrer, 2018). One of the presenters, Dr. Erik Jacobson from Indiana University, shared a mapping he created between three models of fraction multiplication and three common contexts for such problems.

The three models were as shown below. In the overlap model, students use one factor to partition and shade a shape vertically, the other factor to partition and shade the shape horizontally, and then find the product by expressing the double-shaded part as a fraction of the whole shape. There is only one referent for all three fractions in the problem — each factor and the product is considered in relation to the whole shape.

In the part-of-a-part model, students use one factor to partition and shade a shape in one direction (vertically in my example below), use the second factor to partition and shade only the already shaded part in the other direction, and then find the product by expressing the double-shaded part as a fraction of the whole shape. In this case there are two referents. The first factor and the product are considered in relation to the whole shape, but the second factor is considered in relation to only part of the shape.

In the length-area model, each factor is interpreted as a fraction of the length of one of the shape’s sides, and the product is the fraction of the whole shape that a rectangle with those side lengths takes up. In this case there are three referents, with each fraction in reference to exactly one of them: The horizontal length of the shape, the vertical length of the shape, and the area of the shape.

Dr. Jacobson went on to explain the ways different multiplication problem contexts match or do not match these differing numbers of referents. I didn’t manage to scribble down or retain his whole mapping, but there was one part that stuck with me: fraction-of problems (which the presenter called unit-conversion problems) can’t be coherently mapped onto the overlap model because of a mismatch in referents.

For example, consider this problem: Katie has ⅔ of a pizza. She eats ¾ of that. How much of the pizza did she eat? There are two referents in this problem. The ⅔, as well as the requested final answer, are fractions expressed in terms of the whole pizza. But the ¾ is expressed in terms of the ⅔ pizza that Katie starts with. This maps onto the part-of-a-part model. The overlap model is a poor fit because it only has one referent rather than two.
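One way to keep the referents straight numerically is to label each fraction with what it refers to. This is my own sketch using Python’s standard fractions module (nothing from the talk itself):

```python
from fractions import Fraction

pizza_left = Fraction(2, 3)     # referent: the whole pizza
portion_eaten = Fraction(3, 4)  # referent: the 2/3 pizza Katie starts with

# The product re-expresses the answer in terms of the whole pizza:
# 3/4 of 2/3 of a pizza is 1/2 of a pizza.
amount_eaten = portion_eaten * pizza_left
print(amount_eaten)  # 1/2
```

Notice that the multiplication itself silently changes the referent of the 3/4: the answer is back in terms of the whole pizza, which is exactly the shift the part-of-a-part model makes visible and the overlap model obscures.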

This point really resonated for me, because during my most recent bout of curriculum development, one of the many lessons I wrote was about using fraction-of thinking to solve problems like the pizza problem above. The previous edition of our curriculum emphasized the overlap model for solving those problems. I campaigned, successfully, to switch to the part-of-a-part model in this most recent edition. Dr. Jacobson’s point was essentially my argument, although I did not have the vocabulary to express it well at the time. My first thought upon hearing it come out of someone else’s mouth was one of satisfied validation.

This, as per usual, was immediately followed by some self-doubt, because I also remembered the arguments against making the change. Some of our field test teachers, as well as some other staff members, didn’t see the benefits as strong enough to justify changing a tried-and-true lesson that kids tended to struggle with at first but ultimately understand. So even though I now had clearer vocabulary and arguments to justify the switch, remembering the debate led me to ask myself: But does that mismatch really matter to kids? Is that extra bit of shading really going to have detrimental effects on their later understanding?

I’ve been pondering this the last few days, and finally came to a (likely temporary) conclusion after revisiting some literature on manipulatives. Why do we have kids handle concrete objects in early mathematics classes? The main argument is that these objects help kids to connect their concrete knowledge from their own experiences to the abstract mathematical knowledge we hope they learn (Uttal, Scudder, & DeLoache, 1997). In my view, models like the ones shown above are meant to do the same thing. And if kids can’t directly connect the models to the problem contexts (which in turn are supposed to be connecting mathematics to their real-world experiences), then I don’t think the models are serving that purpose.

I don’t doubt that kids can successfully solve fraction-of problems using the overlap model. But when the model doesn’t map onto the problem context — or perhaps more importantly, when the problem context can’t be mapped onto the model — the models are probably serving as just another kind of procedure for them to execute. It may be a more meaningful procedure than a numerical algorithm, but it’s still a procedure that exists separately from the problem. The connection between the concrete and the abstract isn’t being made. I don’t buy that the overlap model will promote misconceptions, per se. But I do think it does less work in terms of helping kids wrap their heads around fractions with multiple referents and how that relates to real-world problems.

The answer to whether or not the change from the overlap to the part-of-a-part model was a good one, then, depends on what we were most concerned about kids taking away from the lesson. Considering that kids derive the numerical algorithm a few days later, I’m glad I campaigned for the shift to a model that might better support concrete-to-abstract connections. That seems more important than giving them a procedure they can execute.

References

Izsak, A., Jacobson, E., Tillema, E., & Lehrer, R. (2018, April). Contrasting perspectives on multiplication, area, and combinatorial problems. Symposium at the National Council of Teachers of Mathematics (NCTM) Research Conference, Washington, D.C.

I admit it. I don’t know, from memory, what 7 × 8 is. Every time I need to know the value of 7 × 8, I think to myself, Seven times seven is 49, plus 7 more is 56.

To tell you the truth, I also do something similar for 7 + 8. I don’t look at that and just know the sum is 15. I think, Seven plus seven is 14, plus one more is 15. Or, sometimes, Seven plus three is 10, plus 5 more is 15. I know a lot more addition facts from rote memory than I do multiplication facts, but I still often have to derive them. And I have a mathematics degree.

If your reaction to this confession is, So what?, I don’t blame you at all. Even though I’ve been at least partially aware of my fact derivation habits for as long as I can remember, I never talked about them until 6 or 7 years ago. They seemed utterly unremarkable and uninteresting. I thought everyone thought about numbers this way.

That turned out to be untrue. When I started working in curriculum development for early elementary school, I learned that one of the great debates in this realm is about student memorization of basic facts. Most researchers and curriculum developers seem to be in agreement that students need to have facility with basic facts. They are a building block for solving more complex problems. The debate rages around how to promote students’ learning of facts.

The primary instructional strategy for this, for a long, long time, was to have students take timed fact tests. Answer these 100 addition and subtraction facts in 90 seconds! Young students were pushed to memorize the facts early so the curriculum could simply move on to other, more complex mathematics topics. However, it turns out that timed testing is associated with higher levels of math anxiety (Ashcraft & Moore, 2009). The timed tests, although intended to help kids build a strong mathematical foundation, actually have the effect of turning a lot of kids away from interest in mathematics. So, there’s been a push against timed testing.

Critics ask, Well, what are we to do instead? Just let kids count on their fingers for the rest of their lives? Nope. There are other ways to think about facts and fact learning. One strategy, of strong personal interest to me, is being explicit about teaching kids derivation strategies like the ones I use. I think of it as the difference between giving a kid a fish and teaching a kid to fish. You can force-feed kids the product 7 × 8, or you can teach kids how to figure it out quickly when memory fails.
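To make the “teach a kid to fish” idea concrete, my 7 × 8 trick can even be written down as an explicit rule. This is my own illustrative sketch (the function is hypothetical, not a real curriculum artifact): start from a known square fact and adjust.

```python
def near_square_product(a, b):
    """Derive a * b from the square fact a * a.

    Uses a * b = a * a + a * (b - a).
    E.g., 7 * 8 = 7 * 7 + 7 = 49 + 7 = 56.
    """
    return a * a + a * (b - a)

print(near_square_product(7, 8))  # 56
```

The value of a derivation strategy like this isn’t the formula itself, of course; it’s that a kid who knows one anchor fact can reliably reconstruct its neighbors when memory fails.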

In short, my personal story is this: The only time in my life I’ve ever hated math is when I had to take timed fact drills in elementary school. The fact that I have to derive 7 × 8 has never been a problem for me because I have strategies for figuring it out quickly and mentally. So I think, on a personal level, that we need to be teaching fact strategies.

I know my own personal story isn’t going to get much traction in the research realm, though. So imagine my excitement when I conducted a study with a colleague that used data from hundreds of thousands of kids to illustrate the promise of a strategy-based approach to fact learning. Read a brief guest blog post about the study here, on McGraw-Hill’s website. If you’ll be at NCTM on Tuesday, come and see my talk to hear about the study in more detail.

I’ve been thinking a lot this week about the role of constraints in learning.

I first got interested in this idea a few years ago after reading a review paper on virtual manipulatives. Moyer-Packenham and Westenskow (2013) conducted a meta-analysis of studies of virtual manipulatives (VMs), finding an overall moderate positive effect on learning. They also examined the studies to identify specific affordances of VMs that seem to contribute to the positive effect. They described one such affordance as follows:

One affordance identified during the conceptual analysis was focused constraint. Constraining and focusing features included: bringing to a specific level of awareness mathematical aspects of the objects which may not have been observed by the student; and, applets focusing student attention on specific characteristics of mathematical processes or procedures. (Moyer-Packenham & Westenskow, 2013, p. 42)

Moyer-Packenham and Westenskow’s (2013) review pointed me to a few nice examples of the ways constraints can influence learning, and I’ve found a few more within the VM literature. For example, constraints can promote efficient problem-solving strategies. Manches, O’Malley, and Benford (2010) found that when a VM constrained students to move only one counter at a time, they were more likely to use compensation strategies to find number combinations. That is, rather than starting from scratch when finding number combinations, students were more likely to make small adjustments to the combinations — transforming 6 and 3, for example, to 5 and 4 by moving one counter. Students were less likely to do this when using physical manipulatives, when they could move many counters at once. Maschietto and Soury-Lavergne (2013) described how they made design decisions with a VM to promote efficient strategies. For example, they sometimes removed a button that allowed for adding 1. This constraint was meant to encourage students to instead add 10s to complete a task.

Constraints can also draw attention to particular elements of mathematics that tend to pose difficulties for students. Evans and Wilkins (2011) found that the tools students used to manipulate the pieces of a virtual tangram, while somewhat restrictive in that they separate rotation from other moves, focused their attention on the underlying geometry. By contrast, students using physical pieces, where movement was not restricted, did not discuss the underlying geometry. Hansen, Mavrikis, and Geraniou (2016) described a virtual fractions manipulative that shows the numeric sum when students add two fractions with common denominators, but does not show the numeric sum when they add fractions with unlike denominators. Students see a visual representation of the two combined fractions, but the lack of a numeric answer prompts them to think about how to express the sum numerically. The authors described one teacher’s thoughts about how this constraint helped a student overcome a tendency to add fractions by adding numerators and adding denominators.

All in all, these articles really pushed my thinking about constraints. The word constraints tends to carry with it some negative connotations, and I found it really interesting to think about the positive effects they can have on learning. They can promote efficiency and help students think at multiple levels about a problem — about the overall problem-solving task or procedure as well as the underlying mathematical ideas.

With that summary, my train of thought switched tracks a bit. Efficient strategies and multiple levels of abstraction — what does that sound like? Computational thinking! I’m deeply interested, now, in how VMs can play a role in transforming elementary mathematics to support CT and get kids ready for computer science. That’s something I hope to explore further in my remaining years in graduate school. Stay tuned for further posts on that.

But staying with the idea of constraints a bit longer, connecting constraints to CT did make me remember a conversation I had with the LTEC team a while back. We were developing a learning trajectory for sequencing, and discussing this particular learning goal: “Choose from a limited set of instructions a valid set to accomplish a particular task.” We came to realize we had differing ideas about the effect of the constraint, “a limited set of instructions.” I had been thinking about it as a scaffold: choosing among a few options can be easier than coming up with the answer out of nowhere. But other team members pointed out the constraint can actually add difficulty: It’s easier to express something using any words or actions you want than it is to express the same idea using only a limited set of options.

So what’s the difference? Why are some constraints helpful and others not? I think the key lies in the source of the constraint. When constraints are intentionally built into an educational artifact or task by a thoughtful designer, they can be really helpful to learning. When tasks are constrained by the real-world context of the problem — for example, the particular commands available in a programming language — those constraints pose learning challenges. Still, they are challenges we need to help students overcome. Designers of educational interventions would do well to keep both kinds of constraints in mind.

Manches, A., O’Malley, C., & Benford, S. (2010). The role of physical representations in solving number problems: A comparison of young children’s use of physical and virtual materials. Computers and Education, 54(3), 622–640. https://doi.org/10.1016/j.compedu.2009.09.023

In two of my courses this semester, we’ve spent some time talking about well-structured domains (WSDs) and ill-structured domains (ISDs) and the ways in which beneficial instruction might look different for each. A well-structured domain is one in which all concepts and procedures can be readily delineated and described. By contrast, “[i]ll-structured domains are characterized by being indeterminate, inexact, noncodifiable, nonalgorithmic, nonroutinizable, imperfectly predictable, nondecomposable into additive elements, and, in various ways, disorderly” (Spiro & DeSchryver, 2009, p. 107). As examples, Spiro and DeSchryver (2009) describe the idea of muscles bending a joint as a complex, but ultimately well-structured domain. There are many facets to this process, but in the end all those facets can be well described and the way in which muscles work is consistent across human beings. By contrast, the concept of justice is an example of an ISD because its application and meaning across instances will vary considerably.

Spiro and DeSchryver’s (2009) principal argument is that highly guided and direct instruction may well be the most effective approach for learning material in WSDs, but such approaches are ineffective in ISDs. Providing definitions and a discrete set of examples of justice, for example, serves to provide students with a narrow understanding of the term and can promote significant misconceptions. Spiro and DeSchryver argue that instruction in ISDs should therefore facilitate “a nonlinear criss-crossing of knowledge terrains to resist the development of oversimplified understandings” (p. 115). Students should be given experiences that help them apply existing pieces of knowledge about justice, for example, in new ways, so that they can think flexibly about the concept.

For the most part, I buy into this idea. I agree that we tend to teach kids to think too rigidly about concepts. Here’s my problem with the discussions surrounding this, though: While history and philosophy are the typical examples given for ISDs, mathematics is almost always used as a go-to example of a WSD. For mathematics, the narrative says, direct instruction is just fine!

This deeply bothers me.

I’m not going to try to make the claim that structure doesn’t play a huge role in mathematics. I completely understand why people think of mathematics as well-structured. Being a mathematician is, in large part, about seeing structure in an ill-structured world. My worry is that references to mathematics as a WSD justify the perpetuation of mathematics instructional practices that are both problematic and deeply entrenched in our instructional system.

Take, for example, the standard algorithms for the four basic operations. Claims that direct, highly guided instruction is optimal in mathematics would suggest that explicit teaching of the standard algorithms is fine. Yet we’ve known for years that rote teaching of algorithms, without accompanying opportunities to invent algorithms, is harmful to kids’ number sense and understanding of place value (Kamii & Dominick, 1997). And what about word problems? If direct instruction is suitable for mathematics, it would seem that all those superficial word problems we place at the end of lesson problem sets are just fine. Yet we’ve known for years that students’ school experiences with word problems lead them to dissociate school mathematics from sensemaking — they don’t take context into account when solving word problems (Silver, Shapiro, & Deutsch, 1993).

This has gotten me thinking harder about whether or not I believe mathematics is really well-structured in the way that Spiro and DeSchryver (2009) describe. Does addition, for example, have a straightforward definition? If we’re thinking about it as an operation on abstract numbers, then maybe it does. But if we’re thinking about how it applies to contexts, then I do think it can have many meanings. Addition, after all, has multiple use cases. It’s useful when you want to find a total (4 blue fish and 3 red fish, how many all together?), make a change that results in more (4 blue fish and 3 more blue fish come, how many blue fish now?), understand a comparison (I have 4 blue fish and 3 more red fish than blue fish — how many red fish?), and so on (Usiskin & Bell, 1983). Given this, is addition any more easily described to students than the ill-structured concept of justice? I am not so sure.

I like Spiro and DeSchryver’s (2009) call to help students “criss-cross” ill-structured domains to avoid oversimplified understanding. I just wish that mathematics were not so often casually discussed as the domain where it does not apply.

References

Kamii, C., & Dominick, A. (1997). To teach or not to teach algorithms. The Journal of Mathematical Behavior, 16(1), 51-61.

Silver, E. A., Shapiro, L. J., & Deutsch, A. (1993). Sense making and the solution of division problems involving remainders: An examination of middle school students’ solution processes and their interpretations of solutions. Journal for Research in Mathematics Education, 24(2), 117–135.

My contribution to the conference was a brief presentation about the ways in which we (meaning the research team I’m a member of here at Michigan State) have been thinking about bringing computational thinking (CT) into teacher education (Rich & Yadav, 2018). Hopefully, I’ll be able to point to a few publications about these ideas soon, but in the meantime, the crux of our argument is as follows:

Starting out with unplugged CT activities (that is, activities that do not involve any tech), and building first to low-tech and then high-tech activities, will build on what teachers know and are comfortable doing (National Research Council [NRC], 2011).

I talked about a series of fractions activities to illustrate our proposed no tech to low tech to high tech continuum:

No tech. As an entry point, preservice teachers (PSTs) could think of a fraction, such as ⅔, as an abstraction — a symbol that represents a lot of coordinated ideas, including two different numbers of parts and the idea of equal-sized pieces (Confrey & Maloney, 2015). They could use decomposition to break down the task of identifying examples of a particular fraction into a series of progressive sorts of fraction cards. This would be an entirely unplugged activity that highlights key CT ideas.

Low tech. A next step might be using virtual manipulatives to help students explore connections between different representations of fractions. Many available virtual manipulatives use simultaneous linking of representations (Moyer-Packenham & Westenskow, 2013), so that one representation automatically updates in response to changes to another. Using such linked representations can support students in making mathematical generalizations (Anderson-Pence & Moyer-Packenham, 2016). Discussing with PSTs (and helping them think through ways to discuss with their students) how the affordances of technology were helpful can start to connect CT to the power of technology. We think of these kinds of uses of pre-created technology as low-tech activities.

High tech. After thinking through the connections between CT practices and technology, PSTs and their students could move on to creation of technology, via programming in any of the available student-friendly programming languages. For example, they could create an algorithm that compares a fraction to 1 based on the value of its numerator and denominator.
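To make the high-tech step concrete, here is a minimal Python sketch of the kind of algorithm PSTs or their students might write. The function name and return strings are my own inventions, purely for illustration:

```python
def compare_to_one(numerator, denominator):
    """Compare a fraction to 1 using only its numerator and denominator."""
    if numerator < denominator:
        return "less than 1"      # e.g., 2/3
    elif numerator == denominator:
        return "equal to 1"       # e.g., 3/3
    else:
        return "greater than 1"   # e.g., 5/3
```

Even a small conditional like this asks students to articulate the general rule (compare the numerator to the denominator) rather than evaluate fractions one at a time — exactly the kind of abstraction the no tech → low tech → high tech progression is meant to build toward.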

My presentation of our proposed no tech → low tech → high tech progression was the last presentation in a symposium on CT in teacher education. The conversation with the attendees, after the talks were completed, focused on a key difficulty in bringing CT to teacher education: making the leap from unplugged to plugged contexts for applying CT is very challenging for teachers. Several attendees shared experiences that made this challenge apparent.

I really appreciated this discussion because it made me realize something that has been incomplete, or even backward, in my thinking about integrating CT in other disciplines. As I wrote about a few weeks ago, I think a key challenge in bringing CT to kids via integration with mathematics is figuring out ways to help students apply the CT ideas already embedded in mathematics to computer science. That was the leap that concerned me. When thinking about teachers, on the other hand, my concern was reversed. Perhaps because I first learned about CT in the context of computer-science-specific initiatives, I pictured teachers learning about CS/CT in isolation, and then needing help seeing the CT in the context of other disciplines.

In short, for kids, I thought about the leap from CT in math to CT in computer science. For teachers, I thought about the leap from CT in computer science to CT in math. I realize now that this separation is artificial. Either way, the leap is going to be difficult. What we’re talking about is transfer — something that is notoriously difficult in education no matter what the context.

Will thinking about progressions like the one I outlined above help with transfer? I don’t know, but I don’t think it can hurt. I’m really excited to start exploring these ideas more.

During my middle- and high-school years, I was what I called a “two-school kid”: I attended a large public high school in the mornings, and after lunch I bused or drove across town to a learning center called the Center for the Arts and Sciences (CAS). Attendance at the CAS required choosing a specialization. Each student spent a full morning or a full afternoon in classes in language arts, global studies, 2D or 3D visual arts, voice and keyboard, dance, theater, or, in my case, math and science. For the most part, this was an amazing experience. I took two semesters of college-level calculus in high school and also got to do some amazing hands-on explorations in science that I have since learned are not at all common in U.S. schools. I don’t have many complaints about my experiences at the CAS. Still, there is one aspect of my time there that bothered me then and, now and again, still bothers me today.

Although most CAS students chose their own specialization, some of us still felt… sorted. It often felt like our choices served as ways to separate us. Specialization allowed the time and space to explore topics in great depth, but it also led to some “us” versus “them” mentality. This was particularly pronounced in the relationship between the math/science department and the theater department. I had a few friends — great friends! — in the theater program, but this didn’t mean that I was unaware of the general narrative that math/science students were serious, studious, and nerdy, while theater students were creative, fun-loving free spirits. It wasn’t the oversimplification that bothered me. It was the separation. I didn’t see either of the characterizations as inherently better than the other. It wasn’t that I ever wanted to switch to the theater program and felt like I couldn’t. It was more that sometimes I wanted to be seen as someone who loved theater, creativity, and fun. But as long as I was a math/science student, that wasn’t a persona I was able to put on.

Fast-forward (eep!) 16 years to a few days ago, when I was reading James Gee’s (2013) Good Video Games + Good Learning. In the first chapter of the book, when discussing his idea of affinity spaces, Gee says,

We are never, none of us, one thing all the time. Sure, the world continuously tries to impose rigid identities on us all of the time. But it is our moral obligation — and one necessary for a healthy life — to resist this and to try to create spaces where identities based on shared passions or commitments can predominate. (p. 7)

I was really struck by his first sentence: We are never, none of us, one thing all the time. His point isn’t that we can be more than one thing, which is the argument I usually make about identities. My issue in high school wasn’t that I wanted to be a math/science student or a theater student. It was that I wanted to be both. The multiplicity was what I was focused on. But Gee takes this a step further. Not only can you be more than one thing, but you don’t have to be all of those things all at once and all the time.

This seems like a subtle and maybe obvious point, but it changed my thinking about the identity I’m most interested in from a research perspective: that of being a “math person.” The number of students and adults who will outright tell me that they are “not math people” bothers me greatly. I think the pervasiveness of this narrative reflects the inflexible way mathematics has long been taught. I often describe one of my career goals as working toward a world where every K-12 student will say, “I am a math person!”

Most of the self-proclaimed “non-math people,” upon hearing this from me, remain skeptical, despite my passionate narratives about the work of people like Carol Dweck (1999) and Jo Boaler (1998). You just had a bad experience! I say. You are a math person, I swear! They never believe me.

Gee’s (2013) comments have me thinking that there might be a way to make my message more accurate, and therefore believable to even the skeptics. Perhaps people (adults and younger students alike) can’t imagine taking on “math person” as an identity to wear all the time. But maybe they can imagine putting on a “math person” hat on occasion. Maybe the fluid nature of identities can help people realize that just about anything — even math — is within their realm of possibilities.

Being a math person doesn’t have to mean fundamentally and permanently changing who you are. That’s my new thought for the day.

References

Boaler, J. (1998). Open and closed mathematics: Student experiences and understandings. Journal for Research in Mathematics Education, 29(1), 41–62.

In one of my courses this semester, we have an ongoing assignment as we work through the reading list. As we read, we are to pull out quotations that can be unpacked multiple ways. Rather than looking for passages that succinctly sum up one point, our goal is to pull out passages that can be connected to other aspects of the piece (and to other readings from the course) in multiple ways. To point out the connections we see, we’re tagging these multi-spoke quotations with themes.

One of our early readings in the course was Seymour Papert’s Mindstorms. There is one passage I picked out that I’ve been mulling over ever since:

Dynaturtles can be put into patterns of motion for aesthetic, fanciful, or playful purposes in addition to simulating real or invented physical laws. The too narrowly focused physics teacher might see all this as a waste of time: The real job is to understand physics. But I wish to argue for a different philosophy of physics education. It is my belief that learning physics consists of bringing physics knowledge into contact with very diverse personal knowledge. And to do this we should allow the learner to construct and work with transitional systems that the physicist may refuse to recognize as physics. (Papert, 1980, p. 122, emphasis added)

I really liked this passage for a lot of reasons. First, I noted the bold phrase, because it illustrates the way Papert argues against the separation between play and learning — a separation I believe we as educators need to keep pushing against every single day. I also liked his reference (shown in italics) to “very diverse personal knowledge,” which I interpret as attention to equity. He doesn’t just mention personal knowledge. He mentions very diverse personal knowledge. I think this reflects a theme that runs through his book: he believes every student can and should find a personal connection to content.

Even though I like this quotation for multiple reasons, it’s the last highlighted phrase, in bold italic, that has been niggling in the back of my brain for a while. At the end of the passage, Papert argues that students and educators should take advantage of “transitional systems that the physicist may refuse to recognize as physics.” The word refuse is what first caught my eye, I think. Papert doesn’t say that physicists cannot or do not recognize work with dynaturtles as physics. He says they refuse. There’s an intentionality in that word. An implication of a decision to deny any connection between dynaturtles and physics.

I have not worked with any physicists in any educational pursuits, but I’m willing to believe that refuse was an appropriate word choice. In mathematics education, at least, one only has to look as far as the “Where’s the math?” debates (Heid, 2010; Martin, 2010; Battista, 2010; Confrey, 2010) to realize that there is evidence of refusal to recognize certain experiences as mathematics learning. While I do not go so far as to claim that disciplines have no boundaries, I do worry about refusals to recognize experiences as related to mathematics. As the incomparable Stephen Hawking (may he rest in peace!) once said, “People have the mistaken impression that mathematics is just equations. In fact, equations are just the boring part of mathematics” (Overbye, 2018). I think that such misconceptions stem from the refusals of professional mathematicians to recognize informal pursuits as mathematics — and I further believe that these misconceptions have contributed to making the “I’m not a math person” narrative so prevalent and socially acceptable.

When I first read the Papert quotation above, some part of my mind thought it might be connected to computer science education, but I was not able to articulate the thought right away. A few days ago I finally put my finger on it.

The ongoing efforts to bring computational thinking into K-12 education have led to some discussions about whether or not the CT-related practices already happening in mathematics and science courses can rightfully be considered CT. As I wrote on this blog a couple of weeks ago, I do think there is CT all over elementary mathematics. I also think that explicit connections between these math-embedded CT ideas and CT as applied specifically to computing need to be made for kids. But in order to effectively develop instruction that facilitates those connections, I think we need to recognize CT — everywhere that it lives, in all the “diverse personal knowledge” (Papert, 1980) kids bring to and gain in school — as CT.

Why do I worry that we’re not doing that? Consider this passage from a recent study:

“[I]n our study, the result of student classroom observations, interviews with teachers, and participatory engagement rarely led to activities that fit the prototype of CT if CT means using elements easily recognizable to computer scientists as computer science, that is, creating algorithms and meta-level descriptions of code, coding, engaging in explicit acts of structure creation, structured top-down problem-solving, and so forth. Instead, we found important opportunities to address proto-computational thinking (PCT). PCT consists of aspects of thought that may not put all the elements of CT together in a way that clearly distinguishes them from other human intellectual activity.” (Tatar et al., 2017, p. 65)

At first, I was on board with “proto-CT” as a label for the mathematical and scientific ideas that have elements of CT. Now I wonder, though, if calling that knowledge by a different name is a refusal to recognize it as CT (as Papert would say). And if it is a refusal, I think we need to examine the implications of that refusal for bringing computer science to all.

I’ve been delving into research on virtual manipulatives this semester, and recently I’ve asked myself this question: Are virtual manipulatives curriculum materials?

From what I know of print curriculum-materials research (e.g., Choppin, 2011; Drake & Sherin, 2009; Remillard & Bryans, 2004), physical manipulatives haven’t generally been considered curriculum materials in their own right. On the other hand, in the last few months I’ve read at least two studies (Hansen, Mavrikis, & Geraniou, 2016; Trgalová & Rousson, 2017) that treat a digital manipulative as a curriculum material, studying its use much as the use of curriculum guides has been studied.

Why the contrast? I think there are at least two reasons.

First, virtual manipulatives have a lot more elements that can be intentionally designed. Actions on virtual manipulatives can be constrained (Manches, O’Malley, & Benford, 2010; Moyer-Packenham & Westenskow, 2013), sound and visual effects can be chosen (Moyer-Packenham et al., 2016; Paek, Hoffman, & Black, 2016), representations can be dynamically linked to each other (Moyer-Packenham & Westenskow, 2013; Sarama & Clements, 2009), and so on. Any sort of deliberate choice — of which there are many — by a designer is likely influenced by a particular imagined use.
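The dynamic-linking design element can be sketched in code. In the toy model below (entirely hypothetical — all names are mine, and real manipulatives are far more sophisticated), a single underlying fraction value drives every representation, so a change made through one view automatically refreshes the others:

```python
class LinkedFraction:
    """Toy sketch of simultaneously linked fraction representations."""

    def __init__(self, numerator, denominator):
        self.set_fraction(numerator, denominator)

    def set_fraction(self, numerator, denominator):
        # A single underlying value is the source of truth...
        self.numerator = numerator
        self.denominator = denominator
        self._update_linked_views()

    def _update_linked_views(self):
        # ...and every representation recomputes from it, so the
        # symbolic, decimal, and bar-model views can never disagree.
        self.decimal = self.numerator / self.denominator
        filled = round(10 * self.numerator / self.denominator)
        self.bar = "#" * filled + "-" * (10 - filled)
```

The design choice worth discussing with PSTs is the single source of truth: because the views are derived rather than edited independently, the linkage is automatic, which is precisely the affordance that print representations lack.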

Second, it’s common for virtual manipulatives to be embedded in apps or websites that either directly provide or are highly suggestive of specific tasks (e.g., Watts et al., 2016). This is in contrast to most physical manipulatives, which are generally sold and stored independently of any particular scheme of use. Activities (within curriculum materials or elsewhere) often assume their use, but the manipulatives themselves don’t assume a task the way that tasks sometimes assume use of physical manipulatives.

In all, virtual manipulatives seem much harder than physical manipulatives to separate from the tasks they’re used to accomplish. And tasks are at the heart of curriculum. While physical manipulatives have not generally been considered as curriculum materials, I think virtual manipulatives should be.

Why does it matter?

One of my biggest worries related to technology trends in classrooms is the overuse of adaptive-tutor-like programs that (claim to) use sophisticated algorithms to diagnose students’ exact position along learning paths and walk them forward. I think the desires for automated assessment and personalized learning are, to a point, understandable, but I worry that overuse of such programs removes the teacher from instructional decision-making. It speaks to a trend of deprofessionalization of teaching, which I think is a mistake. My hope would be that if we treat virtual manipulatives as curriculum materials — if we take seriously the teacher’s role in their use (Remillard, 2005) — it might prevent the potential of virtual manipulatives as teaching tools from morphing into attempts to turn them into replacements for teachers.

References

Manches, A., O’Malley, C., & Benford, S. (2010). The role of physical representations in solving number problems: A comparison of young children’s use of physical and virtual materials. Computers & Education, 54(3), 622–640.

Trgalová, J., & Rousson, L. (2017). Model of appropriation of a curricular resource: a case of a digital game for the teaching of enumeration skills in kindergarten. ZDM – Mathematics Education, 49(5), 769–784.