Pages

18 December 2012

Last week, I had the pleasure of listening to a keynote by Charlotte Danielson. Her framework has had a place on my shelf for a long time. It was a key piece of my conversations with beginning teachers and a source of self-reflection. This framework is one of three being used throughout our state as a basis for new teacher and principal evaluation.

Evaluation.

It's an oft-dreaded word in the realm of teaching, or at least one that might not be taken seriously. I know I have been guilty of that, especially early in my career when there was just a simple checklist for the principal to use. I didn't feel that the tool (or follow-up conversation) was particularly useful...and really, would an observation a couple of times a year encapsulate all that I could do in the classroom?

In the intervening years---and the advent of high-stakes testing---evaluation has taken on a more sinister feel. What is your "value added" as a teacher?

But I'd like to set that aside for now, because the similarities between quality teacher evaluation and student evaluation are so striking. These were the four questions Danielson started with:

How good is good?

Good at what?

How do we know?

Who should decide?

This is not so different from how we approach student work. What is "good enough" and against what standard? How will you know when you see it? In most classrooms, the teacher is the "decider," but I continue to see more student self-assessment and conversations about how a grade is determined.

In general, we view teachers as experts about the students and subject(s) they teach---their roles as evaluators are not called into question. But when we take things up a level, I often hear distrust start to creep into the conversation. Does a teacher evaluator always know what good teaching looks like for any and all classrooms? Is the principal the best decider? I hadn't really thought about it this way before---this shift between teacher as student evaluator and teacher as subject of evaluation. There are lots of other things at play, of course, but the basic questions are the same for both.

In order to get a teacher evaluation system in place, Danielson said schools need

A clear and validated definition of teaching (the “what”)

Instruments and procedures that provide evidence of teaching (the “how”)

Trained and certified evaluators who can make accurate and consistent judgments based on evidence

Professional development for teachers to understand the evaluative criteria

A process for making final judgment

These pieces elaborate on the four questions above. But when I think about them within the context of a classroom, I have to wonder how grading practices would shift if all of these were tightened up a bit. There has been considerable energy devoted to developing standards (the "what"), assessments and rubrics (the "how"), and PD in both areas. PLCs don't necessarily replace becoming a "trained and certified evaluator," but I do think that more conversations about evidence of learning are happening.

But that last bullet?

For a long time, the $1M question in standards-based grading has been "How do you crunch the numbers?" In other words, if you have to give a summary grade at the end of the marking period, how do you determine it? I've seen a lot of different attempts to answer this question (I've even made my own attempt)...but I haven't seen a lot of agreement. And this area, for teacher evaluation, appears to stump Danielson a bit, too. Here is a copy of the slide she shared as she talked about this.

The idea here is that there should be a lot of evidence for different attributes, such as questioning and rapport. On the right side of the slide, we see two parts of the process: interpretation and judgment. This is not so different from assigning a grade---we assess often against various standards, consider what we see, and determine a final grade to represent the work. Danielson spoke about the need for a process, as well as the dangers and pitfalls of various approaches, but there was no bottom line in terms of guidance.

Whoever solves this problem in such a way that everyone is happy will make a mint...but I think it's an impossible task. Some are taking the seductively easy (and oversimplified) way out by just reducing it to number crunching. Humans evaluating other humans is always going to be a subjective endeavor, however. We might take comfort by reducing things to formulas---whether we weight student scores or teacher evidence. We might say that it's better than just leaving things up to the subjectivity of an evaluator ("What if my principal has it in for me?"). Just like all of the growing pains with standards-based grading and reporting over the last several years, we are going to have to figure out how to communicate about good teaching while separating it from a one-size-fits-all rating system. I hope that one (grading) will inform the other (teacher evaluation). Right now, the conversations feel very disconnected, as if they are two separate attempts...but I'm having a hard time trying to spot the differences.
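The "crunch the numbers" problem is easy to see with a toy example: the same evidence yields a different summary grade depending on the decision rule you pick. This is just an illustrative sketch---the scores, the rule names, and the decaying-average weight are all made up, not anyone's official method:

```python
# A toy illustration (not anyone's official method): the same five scores
# on a 4-point proficiency scale, summarized by three common decision rules.
scores = [1, 2, 2, 3, 4]  # one standard, oldest to newest

def mean(s):
    return sum(s) / len(s)

def most_recent(s):
    return s[-1]

def decaying_average(s, weight=0.65):
    # Each newer score replaces `weight` of the running total, so recent
    # evidence dominates. The 0.65 weight is an arbitrary choice here.
    total = s[0]
    for score in s[1:]:
        total = (1 - weight) * total + weight * score
    return total

print(f"mean: {mean(scores):.2f}")                          # 2.40
print(f"most recent: {most_recent(scores)}")                # 4
print(f"decaying average: {decaying_average(scores):.2f}")  # 3.51
```

A straight mean says "approaching standard," while the most-recent and decaying-average rules say "at or near mastery"---same student, same evidence, very different report card. Every rule encodes a judgment about which evidence counts most; the formula just hides it.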

10 December 2012

I've been thinking about this space. I know I haven't been around much this year. Some health issues have kept my energy levels low...most days, it takes all I have to get through the workday. But I am feeling better and stronger all the time. I picked up blogging over at Excel for Educators about a month ago, mostly because I was getting more nagging to share ideas over there. But on this, the 8th birthday of this blog, it is time to re-awaken the space and rejoin the community here, too.

19 June 2012

In May, the first draft of the Next Generation Science Standards (NGSS) (think "Common Core," but for science) was released for public comment. The comment period is now over, and I know many of you posted on your blogs or submitted official feedback about the document. If you're looking for the document, you can download a copy here. There's nothing of use left on the NGSS website.

While there are plenty of comments that should have been made about the content and volume (400 standards? Really?) of the NGSS, the first thing that stuck out for me was the incredibly poor design. Here is a sample page:

What an unholy mess. Content is smashed together...organization makes no sense...and the colors are poor choices. Here is what the page looks like to someone who can't see or interpret the red part of the color spectrum:

Hmmm...not much difference between the middle and right columns in the middle of the page. The red text at the top has been completely de-emphasized. What about those who don't see shades of green?

A little better, but not much. I guess accessibility isn't a priority for the writers of these. Goodness knows building understanding was not.

I've been playing around with the design a bit. Here is what I have so far:

I can't figure out why the original document repeats the header in the first row of the table. Meanwhile, why put the abbreviation for the strand before the long form name? I've changed up the header to identify the grade level and name first. And, I've only listed it once.

You might not be able to tell from the original document, but the actual standards---what students should know and be able to do---are the component just under the table header. This should be called out, because everything else on the page is just supporting information...stuff that is nice to know, but not need to know. So, I created a separate section for the junk in the trunk: Connections.


When you look at the NGSS as it stands now, the pages are mostly taken up with the three columns of color---and while the information there is interesting, it's really just an explanation of the learning targets. What the authors are (apparently) trying to say is that they've integrated three pieces of the framework (blue = science and engineering practices; yellow = disciplinary core ideas; green = crosscutting concepts) into one standard: K.OTE c: Use observations and information to identify patterns in how animals get their food.

I took the giant three-column copy-and-paste from the original framework and slimmed it down to the main ideas. I did keep the three-color scheme, but employed hues that everyone will be able to differentiate. If the standards are supposed to be an amalgam of the three areas (core ideas, cross-cutting concepts, and science and engineering practices), then we need a representation that shows this. I'm not sure that my idea for a graphic is the right one...perhaps some sort of "map" would be better; but I do know that the giant blocks of color in the middle of the page are a stupid idea. I'm hoping one of you will have some better ones to share here.

CCSS lists are on the right. This is worthwhile information to provide, but it's not critical to the learning targets themselves. I think these could either go in a supporting document/appendix or be represented differently.

On the bottom of my version, I copied in some language from the original document (connections to other DCIs at this grade level and articulation with others), but there is nothing on the original document which tells us what it means or why it's there. Maybe there will be some explanation in future drafts; but, if this is critical "big picture" information for implementing the standards, it deserves more attention than it's currently getting with the NGSS document.

The next---and final---draft of the standards is scheduled to be released in September. I don't know if we'll see many differences between the May version and the next link in the sausage chain. But I hope the NGSS group will invest in finding someone who can design something usable for everyone.

17 June 2012

Suppose you were magically transported to a rural school district with a total enrollment of ~350 students (K - 12). (If you already work in a district like this, get ready...company is coming.) Your new district has been getting by on whatever allowance the state or feds send their way. There are too few voters and property owners to make a levy worthwhile. You find that a lot of the things you have been using in your former classroom are no longer available. However, the district can choose one thing off the list below. Which one do you think would have the greatest positive impact on student learning? Think beyond testing here to encompass all the forms learning can take, then answer the one-question poll at the bottom of this post.

Here is some additional background for the choices you see:

Instructional Materials: Some of the smaller districts in our state are using math texts from the 1960s (or have no math curriculum at all) because they can't afford to update. Science labs are woefully outfitted (microscopes are older than math books). Assuming you have computers for students to access, software options are limited. There are schools in this state running Windows Millennium. Are current instructional materials the most critical item for student learning?

Technological Hardware: Do you have a projector, document camera, and/or whiteboard in your room? Do students have frequent and regular access to computers/mobile devices? Most classrooms in the state have at least one computer, a document camera, and projector. Is this enough?

Common Curriculum Map and Assessments: Most small districts do not have the capacity to develop common planning and assessment tools. Not only do teachers have multiple preps each day (and for multiple age groups), but the small student populations mean that sharing the planning or co-teaching is not possible. If someone handed you a roadmap that you could count on to free up your time for other pieces of instruction, would you want that?

Instructional Coach: Your opportunities for professional development in a small district are significantly reduced. Your principal might be the instructional leader for the school, but chances are that your principal is also the superintendent, another teacher, or has a host of other duties. What if you had someone designated to support you in the form of an instructional coach---help with planning, provide additional resources, engage in reflective conversations?

Outside PD (conferences, workshops...): Do you think you're going to be able to connect with peers at various conferences and workshops? Where will you find a sub? What about all the travel and registration costs? If conferences and workshops are critical to keeping you current with instructional strategies and other components of student learning, you'll have a sad in a small district.

Many other things can affect student learning: the physical plant (some of our schools are in a significant state of disrepair), quality of administration, community/parent support. I don't want to dismiss the importance of those...or the connection between all of the pieces that play a role in student learning. But this survey is just about you, the teacher. What's your pleasure?

10 June 2012

Teaching (and learning) should be a process full of reflection. The school year, however, is often unyielding in its insistence that we live in the present---not the past. It makes those moments when I get to slow down with teachers and really cuss and discuss what is or isn't working in the classroom all the more satisfying. Another round of rangefinding is over for the year. While these conversations are among the most meaningful I have ever had, I also find them a little bittersweet because we can't dig into student work like this on a regular basis.

My personal take-away this year had to do with how we teach students to think about "credible" sources of information...and how we think about it, too.

My hunch is that we scaffold this skill for students in ways that ultimately turn out to be unhelpful. We give them black and white options for what is---in reality---a very gray area. We give students checklists...ask them to give us a yes/no response to any number of indicators about validity and bias...but this approach does nothing to create meaning for students (even if it does help them identify questions to ask). At the end of the day, how many "yeses" are enough? Are some "yeses" more important than others? Or is this really the point?

If I tell you that I am a biologist, does that make me credible...or do you want to see my college transcript? As an expert, can I be unbiased? When I was 8 years old, I saw a show by Harry Blackstone, Jr.---am I a reliable primary source (or did that have an expiration date)? Are primary sources always the best...why do we assume that eyewitnesses always relay the facts of an event? If a website hasn't been updated in several years about a topic that is old news (e.g., the death of Julius Caesar), is that so bad?

We set up safe searches for students...but we don't tell them the reasoning we applied when selecting the sites. In fact, I doubt we ever ask students to question us about that. Are we using school media resources because they were cheap? Because they came with a suite of stuff offered by a textbook vendor? How did we decide what was best?

I am, perhaps, overthinking all of this. But if the standard we want students to reach is a well-developed bullshit detector, then it would seem that we need to help them develop a (mental) flowchart, not hand them a checklist and assume they can come to a conclusion about what it's all supposed to mean.

Do you have a lesson you've used that helps students develop, and then consistently apply, a way to evaluate sources---something that gets at the deep thinking we need kids to do? Would love to see it!

19 May 2012

Three and a half years ago, I wrote about a district that was turning up standards-based reform all the way to 11. In the Brave New World, the Adams 50 school district planned to do away with grade levels and group students based on their level of proficiency.

Education Week recently profiled the district and its challenges with implementing this type of competency based approach (subscription required).

But four years into the effort, Adams 50's work shows how hard instituting such changes can be, even with broad support.

For example, the district divided its curriculum into 10 academic levels, expanded those to 14 levels when 10 were deemed too broad, and then has had to tweak the levels again when the state adopted new literacy and mathematics standards.

The initiative also continues to be challenged by state testing requirements, which force Adams 50 to group students by grade, even though those students may not have been working on the particular academic areas that the tests cover.

And incorporating the approach into the district's two high schools has been bumpy. The competency-based model is currently being used in the 9th and 10th grades in those schools, but Adams 50 is working through what level-based learning means for grade point averages and class ranks.

Even the most experienced teachers have been left feeling like first-year educators in the wake of the changes, district leaders say they have heard.

Adams might be one of the first districts to take this on, but many states are also considering a change to the way they look at credit.

While Adams 50 has gotten attention for its efforts, competency-based learning has a foothold in 36 states, according to a 2012 issue brief from the National Governors Association. That means those states "provide school districts and schools with some flexibility for awarding credit to students based on mastery of content and skills as opposed to seat time," the NGA brief said.

It went on to note, though, that a common challenge for many such efforts is that other education structures within the state may work against that flexibility. For example, student-level data may be housed in systems that prevent teachers from getting all the information they need to evaluate if a student has fully mastered the academic content.

States also vary in how much they encourage districts to take advantage of such flexibility. New Hampshire is a leader in that area: It is the only state that is requiring its high schools to do away with Carnegie units, which award academic credit based on seat time, and instead award credit based on mastery of course-level competencies.

Connecticut offers districts the ability to separate seat time from credits, but recently, a coalition of district superintendents said it would like to see the state embrace even more widespread change. Every school leader in the state signed on to a proposal that, among other changes, would require that students advance to the next level on the basis of content mastery and would offer year-round learning opportunities.

In the coming age of "anytime, anywhere" learning coupled with standards-based reform, it would seem to make sense to take a renewed (and long overdue) look at how we recognize learning. Seat time is no guarantee of learning. Neither is the number of days in a school year. Efforts to lengthen school days/years might move forward under the veil of increased opportunity, but they are still "one size fits all" models when it comes to mastering skills and content.

Systemic change like this is just about impossible, but I am impressed with these states and districts which are making an effort to move forward. Euphemisms about building the plane during flight aside, the simple truth is that we are never going to have all the answers we need at the time we need them. Sometimes, you just have to jump and trust that you can figure out what to do when you land.

16 May 2012

Last week, I participated in a workshop on data coaching. Along the way, there was a piece on measurable goals. I don't have a problem with including these sorts of goals. Even qualitative data is a form of measurement. But I couldn't help thinking that the term wasn't quite right. Decisions aren't made by data alone. We have to apply context to the measurement in order to make sense of things. I started wondering if evidence-based might be a better fit. Sure, it makes me a bit of a pedant to pick at something like this, but I think word choice is important...especially when it comes to something being packaged for school districts.

One of my favorite quotes I've run across during recent research has been this one:

While data may mean numeric information, the term evidence implies something that furnishes proof. Data become the mirror that reflects the evidence teachers and leaders use to make decisions on effective practice (Ruffner, 2008, p. 19).

When we set goals for ourselves, our students, or our schools, we may not be looking for measurement, in a traditional sense. If I have a student whose behavior is making me nutty, do I care more that the behavior stops (or is modified) or about a particular number of times s/he behaves appropriately? In other words, with some goals, is observation "enough"? When I cook, I make observations about temperature, seasoning, and so forth---I don't stick my thermometer in the flame or have a rating scale for saltiness. I could, but why? The evidence from observation is sufficient to get the job done.

Rick Stiggins has said, "Students should be presumed innocent of understanding until convicted by evidence." We collect measurements (scores) and observations about student learning, but in the end, it's our professional judgment (based on this evidence) that allows us to convict students of understanding. I think this same mindset could easily be applied to other decisions in a school.

I do think that measurements for some goals are important. We track student scores as one way to look at learning...we monitor student absences...we need to know specifics about which students and families need additional services. But, being "measurable" isn't quite broad enough to describe how we view what happens in a classroom (or our lives). I would be far more comfortable with telling people to collect their data, but look at a broader base of evidence before making decisions.

Does this mean that sticking with the data---the measurement---is the better bet? I would agree that the way we look at data is colored by our knowledge and experience...but I don't see any way to get away from that. Data don't interpret themselves---only humans can apply them. There is going to be some subjectivity. The key is to be aware of biases and reduce them whenever possible.

I chatted a bit with one of the leaders of the workshop. He said that he didn't think that there was any difference between a measurable goal and one that was evidence-based. What do you think? Are these the same or different? Is one a better descriptor for how we should develop decisions?

13 May 2012

Several years ago, I bought Classroom Instruction That Works (CITW). I was a seasoned classroom veteran by that point, and while none of the strategies presented were new to me, what I appreciated about the book was that it validated many of the things I had intuited over the years. My teacher prep program had not been very good, leaving me to use trial-by-fire methodology to learn how to teach. (My poor students, those first few years...oy.) The book was one of the first I'd ever seen that took educational research and presented it in an accessible way. I often shared bits and pieces from the information with parents and students as we talked about learning to learn---not just science, but developing habits for a lifetime of learning.

CITW has been a part of work I've done with beginning teachers---another lifesaver in the pool that they could grab as they started their journey. The book has also been a part of a few of the edtech programs I've been involved with, as we look at ways to integrate technology into instructional practices. For experienced teachers, it has served as a quick reference as they extend their skills into new areas.

You might have seen that there is a new edition of CITW. ASCD was kind enough to send me a copy, and while I'm sure the intention is that I would post a review right away, I really wanted to take the book out for a test drive first. This spring, I have been working with a few groups of rural schools around the state. I've been going out to them to help them engage with some after-school PD. The schools take on a variety of forms---from the near one-room schoolhouse (which couldn't host another school's staff for PD because they didn't have a room big enough for 30 adults), to a district where the teachers are bused/carpooled in every day, to ones with a significant agricultural base (I had a teacher tell me she couldn't stay for the session because she had to help hubby fix the tractor), to ones with no math curriculum. CITW, coupled with educational technology, has been the basis for our sessions together this spring. Every group of districts was offered three sessions based on components from the book: Setting Objectives and Providing Feedback; Cues, Questions, and Advance Organizers; and Cooperative Learning. I'm wrapping up my road show for the year, so it seems like a good time to reflect on the second edition of CITW and its impact on teachers.

I think that one of the most powerful attributes of the book is simply that it isn't new stuff. Perhaps that sounds like a disadvantage or waste of time. What I discovered was that in the simple reminders of what constitutes sound instruction, it both validated what teachers were already doing (and made them feel good about it) and allowed them to engage in reflection and deeper discussion. In other words, they might not have learned something "new," but this meant our time together could be spent thinking about professional practice in their classrooms and what they'd like to recommit to. Many of these teachers wear multiple hats in their school districts---you might not just be the fourth-grade teacher; you may well be the principal and superintendent, too. The plain, but potent, ideas presented in CITW are the right sort of nag about the classroom. You know good questions are important...what reminders would be helpful before meeting with your students tomorrow?

Keep in mind that many of these teachers teach in isolation. If you're the only K - 2 teacher in your school, you don't get the opportunity to talk about teaching with other primary teachers. But CITW provided a common language base so that when these groups of teachers were in the same room, they could move their thinking forward during those 90 minutes: a little prompting from the research, paired with time to talk about what works in their own classrooms. It's easy for a lot of us to forget about all the challenges small rural schools have. Teachers there are just as passionate about teaching and learning...and just as overwhelmed (if not more) by the responsibilities posed by their jobs. I won't tell you that a discussion of CITW (or any other book) will change their lives or solve the problems they face, but it does provide a connection with teachers in other places. I have been told that even when "outside" PD opportunities are available, many of these rural teachers do not feel comfortable attending because they think others look down on their job situation (i.e., small school = hick). To have a time and space to safely meet and talk with other teachers is a powerful opportunity for them.

As I've prepared for these sessions, I've had an opportunity to really dig into the second edition of CITW. There are several improvements over the original version, beginning with the simple reorganization of chapters. I like that there is some structure now for learning environments and supporting students to understand and extend their knowledge and skills. I think this would be especially meaningful for beginning teachers who are learning how to put the pieces together and when to leverage particular strategies.

The research for this edition of CITW has been updated. For my work with rural teachers, I was able to use Google Scholar to create links to the new citations. Cooperative Learning is not a new idea or strategy, but what we know about how it works in the classroom increases all the time. Fresh eyes are important. I don't want my doctor restricted to using information from 30 years ago...I want him/her to keep current, even if the disease isn't new. I like seeing new references in the educational research. It doesn't mean the old stuff was wrong; it just helps us extend what we know.

While the first edition supplied more specifics about the effectiveness of each strategy, the second gives better ideas about applying the strategies, along with brief case study examples. This edition even extends the ideas into the realm of technological ways to demonstrate learning. For example, how might teachers and students use blogs in the classroom to reflect on learning goals and set new objectives? I won't claim that the edtech way is better than pencil and paper---you know me better than that---but I do like the acknowledgement that there are multiple ways for students to access and demonstrate learning.

I've had a lot of fun this spring getting to know and learn from teachers all over Washington. We'll also be having 3-day summer events where we will bring many rural educators together for some intensive discussion and work. These are fabulous events. It's so much fun to see them interact with others who really "get" what their classroom world is like. CITW will continue to be a part of the sessions this summer. We'll dig in a little deeper, make more connections with the work they do, and extend it into assessment, grading, and data use. This year, it's all about getting back to basics.

22 April 2012

I've been traveling a lot this spring---the Jenny Appleseed of PD. I've done a session in a mall...in a bookstore...in a conference room that seated 200...a first grade classroom...a school library in one of the smallest districts in the state. There have been any number of topics: standards and assessments; cues, questions, and advance organizers; reading strategies for digital texts; and so forth. If you tell me you need me to do a 20 minute session and the only space you have available is the janitor's closet, I can make it work. I do all right with presenting. I won't tell you that I hit my mark 100% of the time with 100% of the audience, as much as I would like to do so. But I like the opportunities, mainly because of the conversations that happen along the way.

Much of what I do would be considered "traditional PD," which is the current whipping boy of staff development. There are many excellent reasons to revile Sit-and-Get. I can't think of a single teacher I know who can't describe at least one horrific experience as part of a captive audience. Research shows it's grossly ineffective at changing teaching behaviors. It can be mind-numbingly boring. And yet, I think this form of learning still has a place in the arsenal. Why? Two reasons. One is simply that there are some teachers who like it. I routinely get feedback on sessions where at least one person says, "Please don't ask us to talk so much. I just want to sit and listen." Fair enough. But I also think that this type of PD has more than one goal (not just changing instruction). You could argue that the sheer cost of teachers' time to attend a one-hour session gives very little return on investment if it doesn't result in instantaneous change. I would say that there is another purpose: to provide teachers with an opportunity to think and reflect. It's a brief window to feed the teacher soul. You can inspire a bit of wonder and have time for discussion of one or two key ideas. Maybe that's all you need. Not everything in education is a problem to solve. Sometimes, we just want a reason to keep going.

I won't claim that traditional, face-to-face PD is more desirable than other varieties. There are lots of options for teachers through social media channels, "camps," informal workshops, online courses, job-embedded support, and more. All of them have a place. All of them will be a favorite of one teacher or another. I'm all for teachers being able to learn in an environment that supports them best. But what I see lost in all the shouting these days is that it should be up to each teacher to decide. I see plenty of tweets and blog posts deriding one form of PD or holding another above everything else. And what is lost in all of that is the simple truth that even "one size" (traditional) PD does fit some. It doesn't have to fit all. Ditto for unconferences. And PLCs. And Moodle courses. Seems like we should celebrate learning in all of its many forms.

15 April 2012

From: Health Person (HP)
To: Your Boss, HP's Boss
cc: You, HP's Friend, Person HP Wants to Join Her Gang, Person Who Once Attended a Meeting with All of You

Dear Your Boss,

Thank you for your follow up on HP Friend’s hand-written revisions of the raw data. She spent many hours sifting through the numbers to provide an accurate picture for the Summary of Findings report. It is disheartening and unprofessional that You threw the revisions away as it was a public document. HP's Friend was very clear at the meeting that she wanted them back for the team to use to move forward with this report and Your Boss assured us that the revisions would be returned. It is unfortunate that we were unable to make a copy of the revisions as HP's Friend had taped numerous sheets of paper together for ease of reviewing. These revisions were the only documentation that provided us with clear and accurate numbers for our report.

HP

You talk to Your Boss. You've completed all requested work on this project. (A) You weren't present at the meeting discussed and no information was relayed to you. (B) Public documents are the responsibility of the originator...and, as notes/drafts are not considered worthy of retention, they are only public record if the originator chooses to keep a copy. And, most importantly, (C) this is not the first time HP has publicly harassed you. (For kickers, keep in mind that HP has responsibility for K-12 standards against cyberbullying.) And (D) the giant printed spreadsheet would have been unnecessary if HP and HP's Friend had completed their personal responsibilities when the data was submitted (instead of giving it to an untrained secretary...who apparently also did not know how to use a Xerox machine...but this is somehow all Your fault).

Your Boss says that s/he will talk to them. You talk to the HR department, who tell you to just ignore the junior high taunts, although they would be quite happy to call HP's boss in for a chat. But hey, you trust Your Boss will do the right thing.

Only time goes by...and you discover that the problem has been made far worse, as evidenced by forwarded emails from Your Boss. Monday afternoon is your next big meeting with all the players. What do you do?

25 March 2012

This post is my 1600th to this blog. It's a milestone that I thought I would reach long ago, but I've been plugging along the last few years as other demands in my life have taken over. There's been a lot of change in this space over the past 7+ years, and I am always grateful for the way this corner of the Internet gives me a reason to pause, reflect, and reassess.

Atul Gawande was the keynote speaker this morning at the ASCD annual conference. Gawande is a storyteller...a man who follows his own questions and curiosities (e.g. How do we get good at what we do?). While I collected plenty of sound bites of wisdom in my notes throughout his talk, there was one idea in particular that I found myself thinking about afterward.

You see, I've been pondering how to "ride for the brand" recently. If you're not familiar with the phrase, it's a cowboy term that refers to your loyalty and commitment to the ranch/organization. When you sign on to work a ranch, you ride for that brand. Your actions should align with the goals and ideals of the outfit. To ride for the brand is about integrity.

Gawande used an analogy of Cowboys and Pit Crews to illustrate the type of values required in a school or other organization to be successful: communication, discipline, and teamwork. If he really knew anything about cowboys, he would have used "Lone Ranger vs. Cowboys" as his analogy, but I'll forgive him for the stretch because it doesn't change the values he details.

As an educator, do you ride for the brand? How many of us even know what the "brand" is for our schools and districts (and agencies)? I have no doubt that we each have our personal brand---the reason we get up and teach every morning. Does that make us Lone Rangers? What happens when our views don't align with one another, let alone the organizations we represent? If, as Gawande suggests, it is the smallest of adjustments that bring us closest to success, then how do we get there when we haven't even taken the biggest step of identifying why we're on this ride?

And so, on the occasion of this 1600th post, I want to thank the readers who ride for What It's Like on the Inside. Whether you stop by and lurk, skim posts in your RSS aggregator, comment or add a link to your own blog, or think and pass along the ideas, I salute your discipline in continuing your learning, your efforts to communicate ideas, and the kind of teamwork that exists only in a web 2.0 world as we connect across time and space. I wish you much success as you ride for your brand, wherever that may be.

24 March 2012

I'm a virtual attendee for this year's annual ASCD conference. Today, I sat in on three sessions...starting at 5 a.m. (PT). On a Saturday morning. Hardcore, I'm telling you. But I appreciate the opportunity all the same. ASCD is my favourite conference. It is the only one I know of that celebrates such a diversity of ideas and people---all around a common goal of making learning happen for kids.

The first session, Creating the Opportunity to Learn: Moving from Research to Practice to Close the Achievement Gap (book) with Wade Boykin and Pedro Noguera, was phenomenal. I can't tell you that the main points were new, but I have rarely heard them presented with such passion and compassion. Some highlights from what they shared:

NCLB said---for the first time---that the problem is not the children.

There has been a normalization of failure. We have to change the culture around that belief. If you can't change the culture of a school, nothing will change.

We can't ignore social needs, but solving these will not mean that problems in education are automatically resolved.

Implementing something (longer school day, new curriculum) makes it easy to say that something concrete has been done; but, we need to make transactional changes between teachers and students.

We need high quality preschools. This is the biggest difference between the US and other industrialized countries. Invite preschool providers to all of your PD events. You will eventually have the children they serve in your classrooms.

We can find football players anywhere, because we are willing to cultivate that talent. We need to cultivate intellectual talent in the same way.

We are the professionals. The onus is on us to build the relationships with students.

The job of the educator is to create the opportunity to learn. The best teachers don't expect students to learn the way they teach; they teach the way students expect to learn.

We punish the children with the greatest needs, often because we aren’t meeting their needs. The role of discipline is not to teach children to avoid punishment, but to do the right thing—even when we aren’t looking.

Children who don’t believe they can learn are the hardest to serve. If the children can’t do the work, then we are in the wrong work as educators.

We will know we are succeeding when race, SES, and culture are no longer predictors of achievement.

Toward the end of the session, Noguera made an analogy between homes with plastic covered furniture (only removed for special guests) and schools where students are kept from interacting with content and background knowledge. What are your schools like? Are they only for special guests? Or does every child feel like they can be at home there?

If you ever have an opportunity to see these gentlemen present, take it. You will not be disappointed with the way they make you think and reflect...and inspire you to make a difference.

Next up, Robert Slavin from Johns Hopkins and Success for All Foundation with a session on Tech and Talk: Multimedia and Cooperative Learning Team Up.

Not too much to share from this one. Most of the session was devoted to modeling a classroom activity. I have no beef with these sorts of things---it can be very useful to walk through how a situation in the classroom would play out. I do think the session would have been better served by exploring the activity through an adult lens (not asking participants to play the role of students). In my experience, it's far more powerful to either see examples of teachers working (and reflect on those) or allow teachers to apply the information to their own context. Just my $.02.

Here are a few soundbites from the presentation part of the session:

Classroom technology doesn't do cooperative learning well.

Cooperative learning works well when there is a group goal and individual accountability.

To get the most out of technology, we must partner it with a classroom system that works: instruction, practice, assessment, celebration. Each step can be enhanced using embedded technology.

I selected this session because I'm working with some rural schools on integrating technology with strategies from Classroom Instruction That Works. Our next session is on cooperative learning and productive group work. I'd really hoped for some good things to share from this session...but I think I'll have to keep looking.

Although I'm sad not to be at the actual conference, I have to say it was a beautiful afternoon for PD. My view for the final session of the day:

Tide's out, so you can have a view of the shellfish beds.

The Olympics in the distance.

Okay, so moving on to the final session of the day: Finding Each Student’s Sweet Spot: Optimizing Engagement and Learning with Martha Kaufeldt and Gayle Gregory.

This was another presentation which didn't have a lot of new ideas to offer me, which is not to say that others wouldn't have found some good things here. A few of my notes:

Their definition of the Sweet Spot is a "combination of factors resulting in a maximum response with a given amount of effort." The presenters believe that by using brain-friendly strategies to reduce stress and increase engagement in the classroom, each student can find a way to create that maximum response.

I really appreciated their example of a visual agenda (as opposed to just text).

They also offer their materials in Spanish---the first presenters I've ever seen who have made the effort. Kudos.

My big takeaways from today:

Most presenters struggle with a large group. They don't know how to adjust their material for a large audience. Those who do are worth watching...and learning from.

I would love to give every slide deck I've seen today an extreme makeover. (Call me!)

A lot of people are parroting the ideas developed by others in their presentations, without adding anything to the conversation. Makes me appreciate those presenters who think deeply about what happens (or should happen) in a classroom all the more.

I'll be back tomorrow. For now, it's time to celebrate all of this good learning with a beer and something fun to read. Spring rains will be back tomorrow. Might be my last chance for awhile to enjoy a sunset like this:

23 March 2012

Personal circumstances have kept me from attending the ASCD's 2012 Annual Conference this year in Philadelphia. However, I will be engaging in the ultimate "Pajama PD" this weekend by being part of the virtual conference. ASCD is streaming two sessions during each of its workshop time slots, Saturday - Monday, and I am excited about sitting in on a variety of them. Here are the ones I am most looking forward to:

So, put on your comfy weekend clothes and join me here. I look forward to sharing what I learn with you this weekend.

Speaking of ASCD, I need your help. I just finished drafting an article to submit for an upcoming issue of Educational Leadership. I have someone lined up to help with copy editing, but I could use a couple of volunteers to give me some feedback on the content of the article. The topic is how to use data visualizations as feedback in the classroom. If you have time to read (~2000 words) and provide me with some reactions ("I don't get this part...", "Add more here...", "Take this out...", etc.), I can offer you my undying appreciation in return. (Wow! What a deal!) If you're willing and able, contact me at the_science_goddess[at]yahoo[dot]com or @ me on Twitter, and I will email you a copy. My deadline is pressing, so I would need your comments no later than Tuesday, March 27.

15 March 2012

I've been thinking about digital texts and sources---in particular their differences from print texts and what student skills are necessary to comprehend them. I think conversation about this is long overdue...and, unfortunately, too little too late. Some will cheer on the "disruptive" nature of mobile devices, but if we can't teach kids how to read with these...what's the point?

Most of the current arguments for replacing analog texts with digital versions are stupid, pandering to the lowest level of reason.

Print text is old. In general, the lack of updates is not a problem. Hamlet hasn't changed in hundreds of years...2 + 2 is still 4...and even in those areas which change (e.g. the sciences), almost none of the advancements are at an entry level for student learning. I agree that updating texts is infrequent...and recent mergers of book companies have done nothing to improve quality. But assuming that digital will automatically be better because "It's new!" is faulty reasoning. There's no guarantee that a digital text will be better written or designed than an analog book.

Books are heavy. Um, yes and duh. However, this says nothing about the quality of the content or their value to learning. If you hear someone in edtech say that the change in the weight of a backpack is the argument to sell parents on buying an iPad, run in the other direction. Anyone who insults the community like this deserves to be tarred and feathered. If you don't have a reason why the digital text will improve student learning, then stay out of the conversation until you have something intelligent to add.

Digital texts are more engaging and have more advantages than print alone. So far, the research isn't showing that having the ability to click here, there, and everywhere is positive for student learning. As we saw in the last post, comprehension of digital texts requires more skills than for print. While not an insurmountable or inappropriate issue to address, it's naive to assume that digital is just a sub for print. Meanwhile, distraction while reading is not a positive thing. (See Willingham's recent post on ereaders and distractibility for research.) Can I also say that I hate the implicit suggestion here that analog books are boring...or that we all had a shitty education because we were limited to print materials? (Poor us!) Finally, at this point, several informal studies indicate that college students might try digital books---but end up wanting print versions (also see Etextbooks: Students are not loving them). My interpretation of this is that (a) if you only know how to use a print book, you're not going to have all the skills necessary to transition to digital, and (b) digital texts currently do not come with features students want...just what designers thought they could do.

All this being said, digital texts and sources are going to play a larger part in the classroom from now on. Common Core State Standards, now adopted in 46 states, include the use of digital media---but not a single reading skill specifically related to reading online. Big. Freakin'. Mistake. And I would be willing to bet that not one of the states that has elected to add content to the standards has picked online reading skills as a piece.

So, what's a teacher to make out of all of this?

New standards will require students to be fluent with digital media.

Reading print and digital text requires different strategies. Don't assume that your students understand how to apply their reading skills to digital text.

Be intentional about showing students different formats (blogs, wikis, search results) and how to read them. Take the time to teach students to be critical users of this information.

Be aware that comprehension decreases with screen size (and reading time increases). The "Bring Your Own Device" revolution will not be all unicorns and rainbows when it comes to student learning.

Digital sources will provide students with multiple ways to engage with content, but that does not mean they know how to choose or make sense of that information. Only you can help direct and coach them to success. New technologies do not negate the role and responsibility of the teacher.

I'm sure I'll come back to revisit this topic as more information becomes available. And, pending certain events in the next few weeks, I plan to make this a focus at next year's conference circuit. If you have resources to add, questions to ask, or criticism to consider, leave it in the comments.

12 March 2012

In the previous post, we started thinking about whether or not reading a digital text is different from a print text. (It is.) Part of that difference is due to how the reader interacts with the text. Print typically has a linear, page-by-page tour for the reader; digital texts require the reader to find the path.

Unsurprisingly, there are more differences. Some of these are due to the kinds of reading in online environments for which there is no analog model. For example, think about a list of search results.

This is not a table of contents. It's a bit like an index, but the reader will never know the rules (algorithms) behind its generation (e.g., it's not alphabetical). Search results are their own unique type of reading. There are different colours, font sizes and emphases, data (dates, number of authors), descriptions. A lot of stuff. Let alone what happens when you merely reorder the search terms.

Think about all of the dynamic content associated with digital environments. The books on my shelves never shout at me with ads or have a list of the most related chapters along the left side of each page. When we move students to digital media, there is a need to explicitly teach these new types of text. Students can transfer the skills associated with reading itself---phonemic awareness, vocabulary, phonics---but that does not mean that comprehension is automatically developed.

Research in this area points to the need for more complex skills with digital text. Consider the table below:

In moving to digital texts, students need to apply a layer of comprehension skills in three areas: prior knowledge, inferential reasoning, and being able to self-regulate the reading process. The middle column of the table has a description of the ways print and digital texts are similar, with the righthand column showing the additional skills required for digital texts.

Further investigation into the strategies associated with comprehending online text has revealed that reading online is a lot like doing research:

New literacies of online reading comprehension are defined around five major functions: 1) identifying important questions; 2) locating information; 3) analyzing information; 4) synthesizing information; and 5) communicating information. These five functions contain the skills, strategies and dispositions that are both distinctive to online reading comprehension while, at the same time, appear to somewhat overlap with offline reading comprehension. What is different from earlier models is that online reading comprehension is defined around the purpose, task, and context as well as the process that takes place in the mind of a reader. Readers read to find out answers to questions on the Internet. Any model of online reading comprehension must begin with this simple observation.

---Leu, et al., 2007, p. 10.

Each of these has a subset of associated skills, including identifying accuracy and bias. Many of these subskills are the kinds of things we say are important, but assign little time and attention to in the classroom (see Life's Little Mysteries post).

Comprehending digital texts also involves visual literacy: how to read charts, graphs, maps, photos, and other types of visualizations. While aspects of visual literacy have long been a component of informational text, in a digital environment where students can search whole collections of graphics and images (or easily create their own), there is an additional need to "read" items which may have no text at all. Inferential reasoning skills are critical here, just as they are with text.

Want to see some of these skills in action? Have a look at the Videos of Three Online Readers. The page is three years old (and unfinished), but the videos give a glimpse into how students navigate online resources and tasks. Audio quality is not good, but there is not much of it. At the beginning, the researcher goes over the task with the student, then you have 30 minutes of watching the student's screen as s/he completes the task. What do you notice about where students click and how they search? What thoughts come into your head as you watch ("Why did they do that?" or "Don't they know they could...?")? What might this mean for your classroom and the way you address the use of digital texts?

In the final post in this series, we'll take a look at the "So what?" We know digital texts are different...we know that they require new skills...but is that really such a big deal?

10 March 2012

I've always loved to read...and I have to admit that even if a lot of my writing occurs online, I am still an offline print aficionado. I still enjoy reading and collecting "analog" books. I look forward to the arrival of magazines I subscribe to. But, I also have a small digital book collection for reading on my phone, netbook, or tablet when I travel. I have lots of RSS subscriptions and do almost all of my news reading online. As an adult...and as a reader with good skills...I make choices about format based on my purpose for reading.

But what if you're a student learning to read? Does it make a difference if the material is in print or digital format?

I've long suspected that the push to digital texts was going to come with some baggage about how we teach reading. Reading print is not the same as reading online...or so most adults will tell you, even if they can't tell you how it's different. After all, the basic mechanics of reading are the same. But there is something about the experience of being online that is not like the other. With the advent of Common Core State Standards (and their inclusion of digital texts and multimedia) and the increase of mobile devices (tablets, Kindles, phones...), I've finally been poking around in the ed research to answer my questions and concerns: Is reading online really different...or am I just projecting my own experience? If digital text is not like print, how is it different...and what does that mean for teaching and learning?

First of all, I'm finding that this is not (currently) a robust branch of reading research. I do wonder if some of that is just the length of time it takes for peer-reviewed research to happen. With IRB approvals (we're talking young human subjects here), the journal submission and review process, and so forth, we're not talking quick turnaround times for research with devices that change every year. This makes me want to sneak into an International Reading Assn conference and see what people are talking about, but not printing. But beyond that, there isn't much in the trade journals. Even ASCD has an entire issue devoted to reading this month---and not a single article examines what is needed in an online environment or how to support kids learning to read and use digital text. However, we shouldn't interpret this lack of attention as meaning that there is no difference between analog and digital; the ed research that is available supplies a big fat "Yes, it is." to the question of whether reading a digital text is different for students.

For example, when you read a print item, your path is pretty linear. If you're reading a narrative piece, you read until you run out of words, then turn the page. With informational text, you might have more options. Authors will use text features (bold, italics, headings) to cue you about the organization of the text, important points, or graphics you should review. I used to always tell my students to remember that they were in control of their reading. Just because the author had "See Figure 2.1" in the middle of the sentence didn't mean that they had to stop and look at the figure at that time. Tell the text, "You're not the boss of me." and choose what makes sense for you.

For digital texts, this concept of the reader being in control is even more important. Digital texts---even narratives---are not linear. The reader can click on hyperlinks to take them to a dictionary or a related page/text/graphic. Visuals have a much bigger role in all digital texts. The research describes this collection of pieces as an "external text" that the reader builds while s/he reads. The path every person takes through a text---what they click on, which order they look at information---is different. In essence, each person will have their own experience with the text...their own version to work from and create as they read.

What does this mean for students? It depends. In the absence of specific instruction about digital texts, all of the possible outcomes have been seen. Some good readers of analog texts have a smooth transition to digital sources, but some do worse. Some struggling readers have better comprehension using digital text (even better than good analog readers), but some do poorly with both formats. In general, ELL students do not do well with digital texts, because the close-reading strategies we teach them don't work well in an online environment to create an external text.

Are there other differences between analog and digital text that challenge students? And can we adapt our teaching strategies to help students make the most of digital texts? Stay tuned for the next post on the skills required for digital texts. In the final post, we'll put together the "So what?" of using and teaching with digital texts.

27 February 2012

I woke up this morning thinking about all of the students who died when I was in school. When those events happened, they were sad, but somehow didn't seem out of the ordinary. Keep in mind, I had 59 people in my graduating class, so you can imagine what a small district we were. When I was in junior high, death came for the older brother of my friend across the street. He and three others had been out partying and were in a car accident. I remember that he hung on for a few days, but that was all. My friend and her family moved away after that. And then there was the little sister of another friend who was present when her boyfriend was shot at a party. Drunk teens didn't realize the shotgun being waved around was loaded. She ended up being spattered in the process. I don't think anyone went to jail. A friend of my family was in a car accident---everyone was sober, but she was an inexperienced driver on the highway and turned the car around in a not-so-safe area. She, too, hung on for a few weeks. I remember getting to be a senior and realizing no one in our class had died yet. But it happened to us, too. A new kid---I don't even know his name now or how he was killed---died a few weeks into the school year. There were others along the way, too...and I don't remember anything ever said at school by the adults. I assume they must have talked among themselves and decided to just keep moving forward for us.

It was early in my teaching career when I had to deal with the death of a colleague. I was teaching summer school...and the math teacher would go visit her parents in another town each weekend. One early Monday morning, she was ejected from her Jeep when she overcorrected after running off the road. It seemed so wrong to have to tell students that their math teacher wasn't going to be at school today...or ever again.

And there have been my own students, too. Last year alone, I lost three former students---two to illness, one to suicide (or as the obituary worded it, he "lost his battle with depression"). But I recall the few who died while in high school...getting to see the impact from a different side of the desk. As unpleasant as always.

I suppose that all of these things have been on my mind because last week, in the classroom of someone I used to work with, a little boy (apparently accidentally) shot a young girl. They're in third grade, and their teacher used her first aid skills to save the girl's life. When you hear all the stories of violence at school---intentional or not---there is always some level of removal. It's another town...another state. Until now, it's no one I've ever known. Out of all the ways I've seen death touch a school, I've never seen it like this. I can't help but think of the scared children and this former co-worker in a situation she never thought she'd have to be a part of. I don't know how you find the strength to go back into that classroom the next day and be strong for your kiddos again, but I'm proud of her. I may not be able to do anything to help, but perhaps I can let her know that I'm thinking of her and wishing her strength and peace for the days and weeks ahead.

While there's no way to keep Death away from the classroom door, I'm afraid there will be a lot more times in the future where I wish I could.

25 February 2012

In the world of grading practices, there is standards-based...and hodgepodge. Hodgepodge grading (a term found throughout the research literature) refers to a score or final grade that represents more than learning. In other words, a teacher who assigns a grade that includes student "effort," whether or not the assignment was completed on time, neatness, or other factors in addition to what the student learned about the topic/subject/standard, is a hodgepodge grader. General consensus from the research is that this is not such a hot thing to do. Factors should be reported separately.

And then, there is this article which actually makes chicken salad out of these chicken sh..., er hodgepodge, grades. Here's the abstract:

Historically, teacher-assigned grades have been seen as unreliable subjective measures of academic knowledge, since grades and standardized tests have traditionally correlated at about the 0.5 to 0.6 level, and thus explain about 25–35% of each other. However, emerging literature indicates that grades may be a multidimensional assessment of both student academic knowledge and a student's ability to negotiate the social processes of schooling, such as behavior, participation, and effort. This study analyzed the high school transcript component of the Education Longitudinal Study of 2002 (ELS:2002) using multidimensional scaling (MDS) to describe the relationships between core subject grades, non-core subject grades, and standardized test scores in mathematics and reading. The results indicate that when accounting for the academic knowledge component assessed through standardized tests, teacher-assigned grades may be a useful assessment of a student's ability at the non-cognitive aspects of school. Implications for practice, research, and policy are discussed.
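As a quick aside, the abstract's "explain about 25–35% of each other" figure is just the squared correlation (the coefficient of determination, r²). A quick check of the arithmetic:

```python
# Shared variance between two measures is r^2, the square of
# their correlation coefficient.
for r in (0.5, 0.6):
    print(f"r = {r}: r^2 = {r * r:.2f}")
# r = 0.5: r^2 = 0.25
# r = 0.6: r^2 = 0.36
```

(The 0.36 is presumably rounded down to the "~35%" in the abstract.)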

What's this all mean? According to the article, "25% of the variance in grades is attributable to assessing academic knowledge...and the other 75% of teacher-assigned grades appear to assess a student's ability to negotiate the social processes of school." Hodgepodge, indeed. However, "while administrators have indicated that they privilege standardized test scores over other forms of data (Guskey, 2007), little criterion validity has been shown for test scores as they relate to overall student school or life outcomes (Rumberger & Palardy, 2005), whereas teacher-assigned grades have a long history of predicting overall student outcomes, such as graduating or dropping out (Bowers, 2010)." So, if hodgepodge grades are better predictors for whether or not a student will finish school, why not find a way to use those to identify at-risk students?

And so the author (Alex Bowers) of the article did. And this, my friends, is one of the graphics:

I won't get into the nitty-gritty here---you can read the article for yourself, if you like. But basically, what you're looking at here is every student's grades for their K-12 experience, for an entire district. The students are sorted/clustered according to the patterns their grades make. At the top, the kids who are mostly red in the heatmap are those who scored well throughout their academic career...at the bottom, the ones who struggled. The funky brackets on the left cluster students with similar performance, the width and length of the brackets showing degrees of similarity. On the right, the black boxes include data that is not grade dependent but was still considered relevant.
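The sorting idea behind that heatmap can be sketched in a few lines. This is only a toy illustration with invented students and grades---not Bowers's actual method, which applied hierarchical cluster analysis to full K-12 transcripts---but it captures the gist: order students by their overall grade pattern, then flag the ones whose grades are low or trending downward.

```python
# Toy sketch of ordering students by longitudinal grade patterns,
# loosely in the spirit of the Bowers heatmap. Students and GPAs
# are invented for illustration.

# Each student: a list of yearly GPAs (four years here, for brevity).
students = {
    "A": [3.8, 3.9, 3.7, 3.8],   # consistently high
    "B": [2.0, 1.8, 1.5, 1.2],   # declining
    "C": [3.0, 3.1, 2.9, 3.0],   # steady middle
    "D": [1.5, 1.4, 1.6, 1.3],   # consistently low
}

def mean(xs):
    return sum(xs) / len(xs)

def slope(xs):
    # Least-squares slope over year index 0..n-1: is the
    # trajectory headed up or down?
    n = len(xs)
    x_bar = (n - 1) / 2
    y_bar = mean(xs)
    num = sum((i - x_bar) * (y - y_bar) for i, y in enumerate(xs))
    den = sum((i - x_bar) ** 2 for i in range(n))
    return num / den

# Order the "heatmap rows": high performers at top, strugglers at bottom.
ordered = sorted(students, key=lambda s: mean(students[s]), reverse=True)

# Flag students whose overall level is low or clearly trending downward.
at_risk = [s for s in ordered
           if mean(students[s]) < 2.0 or slope(students[s]) < -0.1]

print(ordered)   # ['A', 'C', 'B', 'D']
print(at_risk)   # ['B', 'D']
```

The real study clusters on the full shape of each trajectory, not just a mean and a slope, but even this crude version surfaces the same takeaway: the longitudinal pattern of grades, hodgepodge and all, points at the students most likely to struggle.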

Now, what the researcher found out from doing this sort of analysis isn't groundbreaking: Kids who struggle in school (gradewise) are more likely to drop out. And even though I can't condone hodgepodge grading, what is important here is that this is the first attempt I've ever seen that gets away from hodgepodge analysis.

I think every piece of research I've seen (up until now) does its best to mash the data into neatly digestible bites---just like we tend to do with student grades. Educational research doesn't represent individuals as individuals, but as populations. We seek to generalize, because we feel we have to, given the amount of data we collect. But I can't help but wonder what we'd see if we looked at all of the educational data we collect at both the micro and macro levels: the trees and the forest, like the graphic above. We are slowly taking steps to move hodgepodge away from the classroom performance level. Will we---Can we---see it disappear from the research, too?

21 February 2012

In science education, we talk about student misconceptions frequently. We probe, reteach, and generally obsess about them. But the science classroom doesn't have a monopoly on how students make sense of the world around them. Earlier this year, I heard interesting stories from first graders about how a book gets published. And while working with a group of librarians, I heard some other interesting tales. This time, however, they were about the Internet. For example, one student asked if the library had a printed version of Google. At first blush, this seems like a preposterous question. But if the library has a printed version of the Encyclopedia Britannica on its shelves...and a digital version available on the network, why not a dead tree version of Google?

But beyond that, kids struggle with selecting "good" information from the Internet. One librarian mentioned an interaction with a student who wanted to know if the web page he was looking at was good. She prompted him in the usual ways---check the domain...the sources. She ran through the litany with the student answering each one. And at the end, the kid asked, "But is this any good?"

This is not a post bemoaning the lack of information literacy in our students and how the world is going to hell in a handbasket as a result. Because that's very little of the story. It's what's happening behind the scenes that is far more intriguing.

Here is a graphic showing how information literacy plays out across grade levels and subject areas in Washington state. The EdTech standards are on the left, which is where the concepts (un)officially sit; other standards with similar concepts are on the right.

The takeaway here is simply that nearly every subject area (except math, arts, and world languages) and every grade provides an opportunity for students to build and practice information literacy skills. As a state, what I see here is that we think these are very important. Perhaps more important than anything else we do, considering the formidable presence these critical thinking skills have in our standards.

But what we see in practice is a very different thing. We can blame the (over)emphasis of class time on reading and math...or RTI models...or a host of other things. Time is always the enemy in schools, but we have a choice about how we spend it. And what I hear most often, when it comes to students doing research, is that we do it for them. As teachers, we pre-load a page with web sites we think are the best options for a project. We choose which databases kids should access. We make the critical choices involved...and then wonder why kids don't know if a web page is any good when asked to decide for themselves.

I have been as guilty of this as anyone. In doing so, I've robbed students of the opportunity to struggle with research. The same struggle I had to go through, albeit before the age of the Internet. But I would never have built my base of skills had my teachers pulled books and magazines for me to specifically access...if I hadn't been given the time to practice and dig for myself.

I often remind people that we don't have to grade students on their critical thinking skills---but I do think we should assess them (even just informally) and provide feedback. If we say we value these kinds of skills, then the adults in the classroom have to put their time budgets where their mouths are. We can solve these little mysteries. And more importantly, we can help students solve their own.

18 February 2012

I looked back through my archives here at Ye Olde Blog. The first time I talked about "all of the responsibility with none of the authority" was in October 2005. A long time to be wrestling with that phrase. I've revisited the idea a few times since then, mostly out of frustration with the eunuch, er, unique nature of the work I've been asked to do.

I've learned a few things over the last few years, too. As such, my personal definitions are changing.

Authority should have something to do with experience. There is such a thing as being an authority on teaching first grade...or computer-assisted drafting...or gymnastics. That may not be how teachers view themselves, but perhaps they should. We are used to humbly acknowledging all of the things we don't know. It's true, we may not be experts at everything that happens in the classroom, but that doesn't mean there is not wisdom in our experience. We need to embrace that...not assume that authority and expertise are equivalent.

More importantly, position and authority are not the same---and this is one of the most dangerous assumptions we make in education. Does a principal have the ability to hire a teacher or suspend a student because of his/her authority? Don't those (and other) tasks come with the position? I'm not saying that an administrator can't become an authority about their work---but we shouldn't presume that those are equal. Authority should be earned. For example, I know someone who was handed her position because she was already doing the work---why not assign the title to go along with it? Apart from the obvious problem that someone is now doing a job based on her own definition of qualifications, she's under the illusion that she has authority. Nope---just position. Authority is not something you are handed from above. It is a status given to you by those your position serves. And every time I see this person in queen bee mode, all I can think of is her lack of authority. She might get respect for her position, but I have yet to meet anyone who respects her authority.

And responsibility? I think this one has changed for me only in terms of what it includes. I'm responsible for doing the best quality work I can do. Every day. That work touches a lot more people than it used to, but the model is still the same.

"All the responsibility with none of the position" seems like such an imbalance to me now. Either there's too much responsibility or too low of a position. Responsibility may be acquired with expertise, but you can never make yourself an authority. I don't want to forecast what the future of these terms will be...how my next steps in life and career will influence the way I see them at play in the world. I know they will continue to evolve as I learn.

15 February 2012

Did you know that you can use Microsoft's Office Web Apps to embed a Word, PowerPoint, Excel, or OneNote file on a web page...or blog post? All you need is a free SkyDrive account (learn more here). These slides are a few of the ones I used in a presentation last year for ASCD.

I know that there will be lots of people who think "Who cares?"...and they may well be right. Microsoft, as always, is way late coming to the party. Unfashionably late. For several years now, people have been uploading decks to SlideShare, SlideRocket, GoogleDocs, and other sites which will generate embed codes. But I think the advantage with Office Web Apps is that the files keep their native format. There are a few limitations, but basically, a slide deck like the one above remains the same. I would say that the weirdest part of the app is that when it is embedded in a web page, none of the animations/transitions work...but if the viewer clicks the "view full-size presentation" button in the lower righthand corner, all of the animations/transitions show up. Try it---you'll get a very different look at things. Overall, however, the viewer can see it as it was originally presented. I'll take that any day.

If you want to see a sample spreadsheet, head on over to Excel for Educators, where I'm showcasing a data analysis tool. Excel doesn't make the jump quite as nicely, but it's the best effort I've seen so far. Like the PowerPoint app, the embedded file will not show up in RSS posts. The viewer/user has to visit the actual page.

So, teachers, keep your files in their "native" formats---and get them posted to your class web pages, Moodle sites, and other spots with less hassle and no file conversions.

11 February 2012

Some experiences recently have made me remember this clip from Ali Baba Bunny (1957):

The greedy exclamations have not been about material wealth, but about power over content.

I've been watching from the sidelines as a micromanager and her minions attempt to exert control over digital content. And I'm struck by how daffy the entire approach is. First and foremost, if you're a government agency, you don't really "own" anything. Public funds create public documents and media. You can claim copyright all you want, but a Creative Commons approach is far more appropriate. (And guess what, teachers? Your lesson plans may soon fall under those same guidelines.)

However, this is the lesser sin. What these people don't understand is that in the digital age, once you've created something and posted it, you no longer get to decide how people use it. I'm not talking intellectual property here. I'm talking purpose. You can make a video showing how to score an assessment item, but that piece could be used in multiple ways. Perhaps a pre-service teacher uses it for PD. Maybe a group of teachers in a school watch it to build background knowledge before scoring their own students' work. Administrators might want to watch it in the context of interpreting annual test scores. A "creator" can advise about how an item is best used, but the "user" trumps all. And you know what? Just because you don't post it doesn't mean people won't find similar content elsewhere. You can jump up and down on Bugs Bunny all you want---he's not going away.

But more than that, they've gone Daffy. When I look at something I develop for work, I don't think "Mine!" I look at the item as something to share...something I hope educators will adapt and use and improve upon. It's not mine: It's ours. I might be the custodian, but the content is curated on behalf of everyone. It is not up to me---and should not be up to me (or any other single person)---to choose for everyone else. I don't understand the drive to control every last drop of information. But perhaps the difference is that in a room of people, I don't assume I'm the smartest. I'm there to learn and listen. The Daffy group is convinced they know it all. They're there to talk and silence everyone else.

Beyond that, this group assumes that because someone shares something with them, the Daffys now get control. Um, no. Sharing means that you're invited to participate---not be Columbus. As a guest in the process, use some manners. If all you know how to do is control, not collaborate, then don't be surprised when you aren't asked to the dance anymore.

I feel that way about my blogs, too. I put things here because I want to share them. I'm not interested in making people read them the way I intended. I share my journey as you follow yours (and many of you are kind enough to share your own). After years of blogging here and working on behalf of teachers, I can tell you that it is far more humbling to share. And there are always ideas or experiences I want to keep private. We're all allowed.

In the end, I find myself wondering how people learn to shift their thinking about sharing in the digital age. How do you help people move from seeing their (publicly developed) content as a sheltered daddy's girl to seeing it as a healthy and active part of a community?