Sunday, February 25, 2007

This is one of those lists that looks so good on the surface but is really nothing more than slogans carelessly applied. I don't mean to be so critical (and would rather not be) but really would like to draw out some of the problems of this list.

The way it is: Teachers lecture - students listen The way it will be: Teachers guide - students do

Does this mean: 'teachers tell students what to do, and students do it?' And if so, is this any advantage over lecturing?

It is always fashionable to complain about lecturing, but this does not automatically make the alternative (whatever it may be) acceptable.

In my view, the issue has nothing to do with the form and content of the educational offering. Lectures have value when used appropriately. Rather, the issue has to do with control. Lecture at me when I'm not interested, and no new information reaches my mind.

The proper approach here is to make learning available, in whatever form is desired and appropriate, to assist students as they do what they choose to do.

The way it is: Students work alone The way it will be: Students work in groups

I hated groups. Hated them. Despised them.

Groups aren't about some better way of learning, they're about conformity, power and control.

What is so *bad* about working alone? This doesn't mean that I am isolated, without resources or support. It merely means I don't have somebody telling me what to do, taking credit for my work, and excluding me if I don't conform to their rules.

There is nothing sacrosanct about groups. Certainly no particular learning advantage is gained by forcing students into groups.

The proper approach here is to allow students to form groups if they want to (in other words, to redefine 'cheating') but to allow them to work on their own as well.

The way it is: Subjects are departmentalized The way it will be: Subjects are integrated

I agree. But what does that mean?

If it just means that the math problems now include examples drawn from the science curriculum, then no real advance has been made.

It is as though only two alternatives are envisioned: subjects taught apart, or subjects taught together. That's certainly what the wording of this item suggests.

The real alternative is, of course, subjects not taught. Rather, students engage in real world activities (which may include the solving of problems, but is not exclusively problem-based). As they engage in these activities, learning (that might correspond to 'subjects') becomes available to them.

The way it is: Curriculum fact centered The way it will be: Curriculum problem centered

No.

First of all, there are domains of learning that do not involve solving problems. Art and other forms of creativity, for example. There is no known problem that is solved by the Mona Lisa. Or by the Beatles. But the world is the better for them.

More significantly, a problem centred curriculum is still a curriculum. It retains that idea that there is some One Way that will work for all students. But this is no more true of problems than it is true of facts.

Finally, students are still going to need facts, or perhaps more accurately, some way to get facts. This is merely obscured by problem centered curricula, which impose a layer of obfuscation between students and learning.

The way it is: Teacher primary source The way it will be: Rich resource environment

Yes. But it should be worded 'resource-rich environment' because it doesn't matter whether or not the resources themselves are rich.

The way it is: Primary print medium The way it will be: Variety of media

If print is the primary medium, then why is it necessary to harangue against the lecture (as above) which is primarily an *oral* medium?

The *real* distinction here is between language-based and non-language-based learning resources. Multi-media seems to offer a non-linguistic alternative. But of course, a lot of multi-media is intended to deliver text in other ways. That's what podcasts are about.

More significantly, though, is the question of *why* we need to use non-linguistic resources. Normally here we might get some story about the same information being transmitted using multiple modalities.

However (and I write this while listening to some jazz guitar) what is the message delivered by genuinely non-textual modalities? How is it the same as, say, some sentence, or even some concept?

At the very least, if you get this far, you need now to be asking questions like: if knowledge and learning are not textually based, then what are they? If what we learn does not depend essentially on language, then what does it depend on?

You can't just eschew text without knowing where you're going. Which is why people say that they don't want to use print, and continue to use print.

The way it is: Success = tradition The way it will be: Success = accountability

I am not sure where it has ever been the case that 'Success = tradition' (though I applaud the ambiguous use of the '=' symbol, a vagueness that ought to be celebrated for looking like it means something while remaining meaningless).

'Success', more accurately, has traditionally been represented as the ability to recite, on demand, relevant facts and information. And in some cases, to solve certain types of (mathematical and linguistic) problems. As, say, defined by the structure of such measurement instruments as the SATs.

So, now, how is 'accountability' different from this?

Turns out, it isn't.

The *real* tension here is in whether the measurement of 'success' is curriculum-based or not. Whether you are measured against some sort of academic standard, or against something else.

In my thinking, 'success' is measured by 'something else'. Where the 'something else' means leading a good life, whatever you think that to be.

By any curricular definition, by any measure of 'accountability', people like Bill Gates, Steve Jobs, and many others, are failures.

It's actually pretty easy to recognize success when we see it. In a community, we can tell whether the educational system is successful by the lower crime rates, better health of the citizens, inventiveness and creativity, and the like.

Of course, that's a lot harder to measure, and the politics of accountability won't be satisfied by these real measures. Hence the demand for pointless testing of irrelevant 'knowledge'.

The way it is: Schools are insular The way it will be: Schools are connected

We all have some sort of intuitive idea of what it means for schools to be connected. But if we push these intuitive ideas, we find that they pretty much fall apart.

They fall apart because it's not really true that schools are insular.

They are connected in myriad ways. They are governed by a school board, share in a state or province-mandated curriculum, have teachers represented by a board-wide teachers' union, have ties to the community through the PTA and other such associations, compete against each other in academic and sports competitions, and, of course, can contact each other through the mail, by telephone, online, and more.

So, in this environment, what could the author possibly mean by saying that schools should be connected?

Probably something like the pairing or twinning of classes, shared classes, co-teaching with teachers from other districts, online classes with students from multiple schools.

Things that, in general, constitute a shift from the class as something that is taught by one teacher to the class as something that is taught by multiple teachers. No?

So now I ask, well why is *that* good?

But - of course - the problem is in the statement.

It should not be 'schools are connected'.

It should be '*students* are connected'. And even 'teachers are connected' (though the union already supports that, and should play an even larger role).

Of course, if the principle is that 'students should be connected' then the school is no longer so central as it was. And we have to ask, what was it about the school that made it so central in the first place?

The *school* is what keeps students separate. The less the emphasis on the school - the less, for example, that the school demands of students in a given school day, the less the school blocks and filters content from outside the school - the more students can and will connect.

Magically. Without the help of teachers or schools.

The way it is: 3 R’s (Rote, Restraint, Regurgitation) The way it will be: 5 C’s (Children, Computers, Communication, Creativity, Collaboration)

What teachers and what schools are genuinely willing to give up on rote, restraint and regurgitation?

What is 'curriculum' other than rote?

What is a 'school' other than restraint?

What is 'accountability' other than regurgitation?

I think that when most people read these, they will nod in agreement but they will be thinking of the 5 Cs *in addition* to the 3 Rs.

Look at the last four: 'Computers, Communication, Creativity, Collaboration'.

Are children going to be *required* to use computers, to communicate with each other, to be creative, to work in groups?

Will the school be the place where they go to use computers and to talk to other students? Will the school be the place where they find the materials and tools to be creative? Will the school be the place where they are required to work with other students?

Will they need to show what they know about computers, perhaps by answering questions on a test? Will they prove they have communicated with others by answering questions about them? Will 'creativity' be defined as something safe?

If it was *about* children, then they would each get their *own* computer, with which they could communicate with others - children and otherwise - as they *choose*, with which they could create software or art or literature as they desired, where they could work collaboratively, collectively, or without others at all.

In summary...

My reading of this list is that although it looks like it is on the forefront of advocacy for change, it disguises a sentiment which is at heart fundamentally conservative. It offers the illusion of change, without actually promoting change.

I don't know whether this was the intention of the author, and I won't speculate on motives. It feels to me like a well-intentioned post that was simply unable to move outside of a comfort zone.

But the reason why I felt that it was important to comment on this is that it is characteristic of a lot of recent writing that I have seen in the edublogosphere that walks and talks as though it is at the forefront of something new but is in reality an effort at retrenchment, an effort to protect one's own turf while embracing the change swirling around it.

The recent 'School 2.0' movement is a good example. By locking into the concept of 'school' the proponents, while looking for all the world like they are embracing change, are in fact freezing the state of education into an archaic past, where the school is the centre and where everything else - including the students - revolves around that central concept.

The idea of 'school 2.0' by definition eliminates as out of scope any concept that reduces or eliminates the importance of the school (and by extension, the elements that constitute a school, such as classes and curricula, teachers and lessons).

Given that the shift in focus from authority (such as schools) to empowerment (such as for students) is at the very core of the whole concept of '2.0', the idea of 'school 2.0' is inherently self-contradictory. It stands for the very *opposite* of what its public posture presents.

That's why I posted this response. The future is much more difficult to grasp than a mere set of slogans. Fundamental values are shifting under our feet. Pretending it's something superficial, as represented by this list, won't change that. It is important to have an accurate representation of the issues, so people can genuinely understand what they are facing.

"Do we believe in rigor and passion in our own educations? It's a hard message, but if our free time is filled with unchallenging and mindless entertainment, and if when we talk about our school days we speak of something that is behind us that "we got through," then our children will not know any better.

"When our major method for accomplishing something is enforcement (which is really what the culture of school is now), we give the implicit message that it is not something that is going to be enjoyed, no matter how much we say otherwise. Want to help your child become a better learner? Let them see you studying math or reading a classic..."

This comment is exactly right.

Neither of my parents were academics. Neither attended university (except some night classes my father took at Sir George Williams). But in our household, academic virtues were celebrated and practiced:

- the radio was tuned to CBC (Canadian public broadcasting) and so we would hear world news, scientific programs, 'Ideas', and more...

- there were always newspapers in the house - we all ended up delivering newspapers - and articles of importance, such as the current membership of the Cabinet (in the Canadian government), were posted on the wall. There was always a big map of the world on the wall.

- my mother bought a complete set of the classic works of literature for the house (these were very specifically my mother's, and I had to ask to read them), very cheap Pelicans (low-cost Penguins) that fell apart when you read them. Everything from Shakespeare to Butler to Thoreau to Twain. I read about half the 120-book collection before the middle years of high school (talk about an advantage!)

- I joined the Book of the Month Club with my father, through which I learned a lot of history - Pierre Berton, William L. Shirer, and Albert Speer all stand out, as do my Complete Sherlock Holmes

- there was also technology, and an evident interest in technology, in the house (we weren't just about academics). Our house radio was built from a kit. Bits and pieces of telephones were always about, as my father worked for Bell. We had telescopes and microscopes (much to the distress of the local bug and amphibian population). My younger brothers benefited from my father's interest in computers, but by then (1980s) I had left home. Still, I got my first modem, a big 300 baud box, from my father.

- somehow I came into possession of an old Underwood typewriter (the reason I can't type to this day, because the keys took too much force to push) and a limitless supply of paper.

- I also somehow had access to tools - hammers, saws, screwdrivers, the works - to build things (and we built numerous things, including clubhouses, tree houses, go-karts and even a stage coach).

- we had a (large) garden and learned how to grow food. We were involved in preserving and canning the food (I can still remember piles of beans, a supply that would last the entire winter). We could cook basically whenever we wanted, so I took the opportunity to bake some cakes and pies.

Things like this - which, really, began with my first set of blocks, which had letters stamped on the sides - characterized my childhood. Knowledge and learning were always valued and supported.

At the same time, though, none of it was forced on me. These things were always in my environment, but I wasn't required to read the books (though the garden work was not voluntary - everybody helped because everybody ate). It was all about the environment, and not some rigorous academic regime.

Wednesday, February 21, 2007

As Clive Shepherd writes about cognitive neuroscientist Dr Itiel Dror of Southampton University: "Itiel is becoming a bit of a celebrity amongst the e-learning community in the UK as someone who avoids the grand theories of learning and concentrates instead on practical tips based on what we know about the brain and how it works (assuming we really do, and this I must place on trust)."

Of course, the sceptical side of me says that this is something like saying that so-and-so can tell us best how to win an auto race because he's a mechanic. There is, indeed, a distinction between knowing how something works and knowing how best to use it.

Reading through the points in this summary, they seem sort of right, but not exactly right. Let me clarify them.

The brain is a machine with limited resources for processing the enormous quantity of information received by the senses. As a result, attention is extremely selective and the brain must rely on all sorts of shortcuts if it is to cope effectively.

My response: no criticism of this; it seems to be about right.

Teachers/designers can adopt two strategies to reduce the risk of learners experiencing cognitive overload: provide less information (quantitative approach) or take much more care about how this information is communicated (qualitative approach).

Well, you see now, this approaches the problem not from the nature of the brain but rather from the nature of the information. And when we look at the information as nothing more than a pile of stuff to be processed by the brain, then sure, these are the ways to deal with it.

But the other way to look at it is to, as promised, look at it from the perspective of neuroscience. What does the brain do in cases of cognitive overload? This is important because, if we know how the brain will adapt, we know how to shape our information (if at all).

This is the subject of the next few points, so I'll continue.

It is easier for a person to focus their intention on the desired point if there is minimal noise (other information) surrounding it. Reducing noise also reduces context, so a balance needs to be struck.

I assume he meant 'attention' and not 'intention'. In any case, I'm sure there are all kinds of tests proving this, but I will point out that the nature of the subject is a much more significant variable.

People are able to focus on things even in the most extreme of circumstances if they are sufficiently interested. That's how you can have kids playing video games even while the house is burning down around them (I guess that's the sort of 'context' that would be important). By contrast, if you aren't really interested in what you are doing, the least amount of noise distracts you.

We know this because (as was just stated above) we know that the brain is extremely selective and filters out stuff that isn't important.

Perspective matters. From the teacher's point of view, the content (lessons or curriculum) is constant, while the level of background noise is the variable. From the point of view of the learner, however, the content is also variable. That's why you get two very different interpretations of the same phenomena.

Overload can be reduced by grouping items/steps (what Itiel calls 'chunking'). Grouping can be accomplished by placing people/objects/events into categories, or by compressing a number of procedural steps into one, automatic action. Visually you may separate items by space, size or colour. Learners will naturally employ grouping as a strategy, although they may do this inappropriately and the process requires effort. Better for the designer/teacher to present material ready grouped.

This is a good strategy and one I have recommended elsewhere to help people write academic essays easily and proficiently (and without notes, but I digress). I find it interesting, though, that he used the DE design term 'chunking'. Maybe reading something other than neuroscience?

Yes, there are different types of groups. Groups that make sense conceptually, especially if linked to a larger framework, are better (I would add that colour is rarely, if ever, a part of such a framework).

But is it better for the teacher to present the material already grouped? How does that follow? If the intent is to have the student learn the information (ugh, bad terminology) then we must ask, is it the groups that aid remembering and understanding, or the process of grouping that does this? If it's the latter, then presenting the information already grouped may help the teacher remember, but will do nothing for the student.

Because, as I noted above, it is better if the groups align with a pre-existing conceptual framework, it is better then if the student does the grouping, because that way the process allows the student to connect, in an organized way, new knowledge with existing knowledge.

A side effect of grouping is that once the action is completely familiar (that old 'unconscious competence' phase), the individual finds it hard to explain how they do it; they lose control over the process because it has become automatic (so old hands may not always be the best teachers?). Grouping is essential to our functioning, but there are obvious dangers, i.e. unhelpful stereotyping.

Here there seems to be a confusion between grouping, as in the sense of classifying different perceptual entities into types, and grouping, as in the sense of combining several activities into one. Now this isn't necessarily bad (I have said elsewhere that learning to read is similar to learning to ride a bicycle) but can be very misleading unless carefully explained.

I think it would have been better to present them separately.

There are mental processes that can become automatic. Add 1+1 for example. One of these processes is 'categorization'. You look at a bunch of things and automatically associate some with the others, based on habitually formed patterns of association. In some cases, such as grouping people by colour, this sort of automatic association can be inappropriate.

There are also mental processes that constitute sequences of steps. The steps involved in a logical derivation, for example. So a process that actually involves multiple steps may be performed by an experienced logician as though it were only one step (I called this 'skipping steps' in logic class and complained bitterly about it. "It's obvious," said the professor. "Whaaaa?" I responded).

These are very different phenomena that are essentially the result of the same neural process but which instantiate very differently and need to be approached very differently. Kind of like the way the steering used to recover from a spinout may be exactly the same as the steering required to navigate a hairpin curve. Sure, it's the same motion. But you would describe the two events very differently.

Individuals use top-down processing to reduce overload. This draws automatically on their past experience of the particular context, existing knowledge and intelligence and avoids them having to evaluate all new information from the bottom up. An example would be how people can easily read a sentence in which the letters in each word are jumbled up.

Yes. But...

This is not 'top down' processing as traditionally understood.

There is a very large difference between inferring something on the basis of similarity to a prototype (that is, pattern recognition), and inferring something based on a general principle or rule. By 'top down' we typically mean the latter. But when describing character recognition, as in the example, we are describing the former.

I would also be wary of building the (Darwinian?) intent into the process. People use pattern recognition. It reduces information overload. But it is not necessarily true that people use pattern recognition in order to reduce information overload. People use pattern recognition because that's how neural networks work. Perhaps evolution directed us in this way, perhaps it did not. Either way, our use of pattern recognition in a particular circumstance is not caused by some such intent. It occurs naturally, as though by habit.

Designers/teachers need to take account of the way in which the information is likely to be encoded and processed - it's not 'what you teach' but 'what is learned'.

Except... it is very misleading to say that it's 'encoded'. Otherwise, yes, there is a large distinction between what the teacher teaches and what the learner learns (which is why information-theoretic and transmission-theoretic theories of learning are wrong).

Different parts of the brain specialise in different tasks. Individuals can engage in more than one task at the same time, as long as each uses a different part of the brain.

Of course, these parts of the brain form dynamically and according to experience and circumstance, so there's no telling in advance, or in general, what processes occupy the 'same' part of the brain and what processes do not.

That's why I can read and write while listening to loud music, as I'm doing now, while my father couldn't.

It's a myth that we only use 5-10% of the brain - we use it all.

Correct.

The brain continues to change throughout our lives, even though we stop adding new brain cells in our early 20s. Some parts of the brain are relatively hard-wired (through nature or nurture), some very plastic. It makes sense to concentrate in recruitment on finding those people with hard wiring which suits the job, because no amount of training will sort the problem out later. (Itiel did not go into detail about those capabilities which tend to be hard-wired and those which are more plastic - this is clearly important.)

The main part - that neural nets are plastic - is true and important.

It is also true that some parts are pretty much hard-wired -- good thing, too, or our hearts wouldn't beat and our eyelids wouldn't blink.

But as to how much this carries over into learning or into life skills - this is very controversial. I can certainly agree that there are people with currently existing wiring that may be more or less suited to the job. That's no more controversial than saying people learn different things. But to say that these capabilities are hard-wired is much more questionable.

As you grow older the hard-wired capabilities persist - the most learnable capabilities go first.

This is demonstrably false. For otherwise there would be no incontinence in old people.

Language is more than just a means for expressing thought - in many ways it is thought. If a person is not exposed to any language in early years, then by the age of seven they are incapable of learning it.

I doubt that this is uniquely true of language - it is probably true for any pattern set. Can a person become fluent in mathematics despite never having been exposed to numbers? Can a person become a musician having never been exposed to tone and melody?

People who have not specialized in the nature of language typically take language as a given - some sort of folk-psychological representation of Chomskyian generative grammar. And then suppose that this then must be the nature of thought.

Even if language is thought - which I would not grant for a second - we still know nothing about the nature of thought if we do not agree on the nature of language. Which we probably most emphatically do not.

The two sides of the brain really do have different functions (I thought this was just pop psychology). The left brain concentrates on language and analytical skills; the right has the spatial abilities. The left side of the brain controls the right side of the body and vice versa. The left and right sides of the brain do not interact physically.

They do interact, through something called the 'corpus callosum'. But yes, the two sides of the brain do specialize, as observed. That said, to my knowledge, this specialization is not hard-wired.

The size of a person's brain is not an indicator of intelligence.

Within the normal variation of human brains, that is. My poor cat, with her cat-sized brain, will never reach human intelligence. But she's still very cute.

20% of your blood is in the brain.

Which stresses the importance of nutrition to brain function.

You never lose anything from long-term memory, just the ability to retrieve it. Retrieval is a function of how you encode memories / the number of links you provide.

Well, yeah, in the sense that the connections that constitute the long-term memory are basically permanent, more or less. But you don't 'retrieve' memory the way you retrieve a book from the bookshelf (even though it feels that way).

'Retrieval' (properly-so-called) is a case of pattern recognition - and the less salient a pattern becomes in the mind, the less likely it is to be associated with a current perception.

Working memory consists of 7+/-2 items (again I thought this was pop psychology).

Yes. And there's that 'grouping' again, in a third guise. We can remember things that are more than 7 words long by recognizing them as coherent patterns. That's why I can remember something like 'Turn right at the third light and then left at the second stop sign, then go four blocks' even though it consists of 18 words (and 70 letters). Or 1-800-857-2020 even though it's 11 digits long.

What we put into working memory first depends on pattern recognition.
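The point about chunking can be sketched in a few lines of code. This is a minimal illustration, not anything from Dror's talk: the phone number and the chunk boundaries below are my own hypothetical example, showing how the same 11 digits collapse into just four familiar chunks.

```python
def chunk_digits(number: str, pattern: list[int]) -> list[str]:
    """Split a digit string into chunks of the given sizes."""
    digits = [c for c in number if c.isdigit()]
    chunks, i = [], 0
    for size in pattern:
        chunks.append("".join(digits[i:i + size]))
        i += size
    return chunks

raw = "18008572020"
# 11 separate digits would overflow a 7 +/- 2 working memory...
print(len(raw))                          # 11
# ...but grouped into familiar phone-number patterns, only 4 items remain
print(chunk_digits(raw, [1, 3, 3, 4]))   # ['1', '800', '857', '2020']
```

The chunk sizes here (1-3-3-4) are simply the conventional North American phone-number grouping; the point is that the number of items to hold in mind depends on the patterns the rememberer already knows, not on the raw digit count.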

To reduce cognitive overload, take out every word or picture that is not necessary or relevant to your learning goals. Even then, don't deliver more than the learner can handle (presumably by modularising the learning).

This is effective only at the very gross level, and not particularly useful as you get into finer details (and more precise definitions of 'necessary').

I have seen studies, for example, showing that a slide show with bullet points is more easily remembered than a slide show with the same bullet points and animated graphics.

I expect, though, that the placement of the 'NRC' logo in the corner of the same slides would not have an impact either way.

I also expect that the removal of 'unnecessary' letters in the bullet points would actually hinder memory. Fr exmpl, rmving mst f th vwls. Thy r nt necsry in the snse that we cn stl rd the sntnce, bt they hlp wth the grping.

So - better advice would be something like - present material that accords (perhaps with some cognition) with patterns that will already be familiar to the learner.

Provide the learning when it is needed, not before.

Sure. But why? This isn't determined by whether it is necessary or not, but rather, by whether it is salient or not.

Be consistent in the manner of your presentation, e.g. the interface.

This can actually be distracting, taken to extremes. That's why documentaries switch from having a person talk to showing some nature scenes with a voiceover to interviewing some other person.

Be consistent in the level of your presentation, i.e. not too complex, not too simple. Try to work with homogeneous groups; better still personalise the learning.

Yes, but again, why? I would argue that this facilitates pattern matching.

Engage the learner by grabbing their attention, allowing them to determine their progress, providing constructive feedback, introducing an element of excitement/surprise.

Again, this doesn't follow from the presentation above. What has happened here is that some hackneyed (and vague) pedagogical tips have been attached to some discussion of neural function, without a clear linkage between them.

Be careful of allowing the learner too much control over the learning process if they don't have the metacognitive skills, i.e. they don't know what they know and what they don't know, nor how best to bridge the gap. Ideally help learners to increase their metacognitive skills, i.e. learning how to learn.

This has utterly nothing to do with the brain theory discussed above.

If the content has more to do with attention than, say, distraction, then taking control away from learners, even in areas where they do not have skills, may cause more harm than good.

And what are the metacognitive skills? What is 'learning how to learn', for example?

Providing the learner with control over pace and allowing them to go back and repeat any step is important.

Same point.

The learning benefits by being challenging. Performance targets, rewards and competition can increase the degree of challenge, perhaps through the use of games.

And again, same observation. This doesn't follow from what has been stated above. Sure, it's good advice (what Seymour Papert and James Paul Gee call 'hard fun'). But why is it good advice? What else can we learn about this piece of advice? What kind of games, for example (see Aldrich on this)?

Anyhow, those are my thoughts based on this reading of Shepherd's article. I also read a bunch of Dror's publications online and certainly have no quibble with his neuroscience. I just think that the study of teaching and learning involves more than just neuroscience, and that there are areas of complexity and potential confusion Dror may not have considered in his work.

Tuesday, February 20, 2007

Preparing a slide presentation for a talk tomorrow, I decided to use the OpenOffice slideshow generation tool instead of PowerPoint, which is what I usually use.

Opened it up, went to the outline view, because that's how I like to compose presentations. Text is huge. Why? Because it's the same font as the slide. What's the point of having an outline view if it's the same as the slide?

But I write an outline, then go to prepare the individual slides. I like to have text boxes. 'Insert text', right? No, the command is not there. There's no freaking way to insert text. Moronic! Why remove this feature?

Well, maybe there's some other way. Check the web. Nothing. Oh, maybe check OpenOffice help. Type in the search term. Huh? Oh, it has defaulted to the contents view. Hit the tab for search, type in the term again, hit enter...

Letter to the Editor in response to this article in the Globe and Mail (no comment area was available).

It is irresponsible journalism to merely repeat poll results with no analysis or criticism.

This is especially the case when the response to the question "How would Canadians vote if an election were held today?" is misinterpreted to read "How would _you_ vote if an election were held today?"

The posing of such a question in such a misleading manner casts the reliability of the survey into question.

Additionally, the article depends for most of its rhetorical force on an uncriticized interpretation of the polling questions.

Why should we believe Allan Gregg that these numbers create "winning conditions" for the Tories?

Why, for example, should we believe that Harper's "best leader" numbers are more important than the number stating that more Canadians identify with the Liberals than the Conservatives?

For that matter, the use of such vague phrases as "identify with" and "best able to manage" is sloppy polling. Polls should ask precise questions, such as "Who will you vote for?" The use of vague questions suggests that the wordings are being used to manipulate the poll results.

One wonders whether other polls were taken for the same client and rejected because the numbers didn't look right.

No matter how you feel about these questions, merely reporting the poll results along with Gregg's opinions is irresponsible journalism.

Journalists have an obligation not merely to present matters of opinion, such as polls, to their readers, but to provide others with the opportunity to assess their reliability.

To simply present poll results as fact is nothing short of propaganda.

(One of the things I really dislike about Moodle is that I have to use the website to reply to a post - I get it in my email, I'd rather just reply in my email.) Anyhow...

It occurs to me on reading this that the assembly line can and should be considered a primitive form of connectivism. It embodies the knowledge required to build a complex piece of machinery, like a car. No individual member of the assembly line knows everything about the product. And it is based on a mechanism of communication, partially symbolic (through instructions and messages) and partially mechanical (as the cars move through the line).

The assembly line, of course, does not have some very important properties of connectivist networks, which means that it cannot adapt and learn. In particular, its constituent members are not autonomous. So members cannot choose to improve their component parts. And also, assembly line members must therefore rely on direction, increasing the risk that they will be given bad instructions (hence: the repeated failures of Chrysler). Also, they are not open (though Japanese processes did increase the openness of suppliers a bit).

It is important to keep in mind, in general, that not just any network, and not just any distributed knowledge, qualifies as connectivist knowledge. The radio station example in particular troubles me. It is far too centralized and controlled. In a similar manner, your hard drive doesn't create an instance of connective knowledge. Yes, you store some information there. But your hard drive is not autonomous, it cannot opt to connect with other sources of knowledge, it cannot work without direction. It doesn't add value - and this is key in connectivist networks.

Response: Jeffrey Keefer

Stephen, when you said "But your hard drive is not autonomous, it cannot opt to connect with other sources of knowledge, it cannot work without direction. It doesn't add value - and this is key in connectivist networks," you seem to be speaking about people who have the freedom to act independently toward a goal, which is something that those on the assembly line in your earlier example are not necessarily free or encouraged to do. If they are directed and not free, it seems that they are more like independent pieces of knowledge or skills, that, strategically placed together, make something else. If that can be considered connectivism, then what social human endeavor (from assembling food at a fast food restaurant to preparing a team-based class project to conducting a complex surgical procedure) would not be connectivistic?

Yeah, I was thinking that as I ended the post but didn't want to go back and rewrite the first paragraph.

Insofar as connectivism can be defined as a set of features of successful networks (as I would assert) then it seems clear that things can be more or less connectivist. That it's not an off-on proposition.

An assembly line, a fast-food restaurant -- these may be connectivist, but just barely. Hardly at all. Because not having the autonomy really weakens them; the people may as well be drones, like your hard drive. Not much to learn in a fast food restaurant.

One of the things to always keep in mind is that connectivism shows that there is a point to things like diversity, autonomy, and the other elements of democracy. That these are values because networks that embody them are more reliable, more stable, can be trusted. More likely to lead, if you will, to truth.

Karyn Romeis writes:

What I really am struggling with is this: "The radio station example in particular troubles me. It is far too centralized and controlled." Please, please tell me that you did not just say "Let them eat cake".

I presume that the people who make those calls to the radio station do so because they have no means of connecting directly to the electronic resources themselves. Perhaps they do not even have access to electricity. In the light of this, they might be expected to remain ignorant of the resources available to them. However, they have made use of such technology as is available to them (the telephone) to plug into the network indirectly. They might not be very sophisticated nodes within the network, but they are there, surely? It might be clunky, but under the circumstances, it's what they have: connection to people who have connection to technology. Otherwise we're saying that only first world people with direct access to a network and/or the internet can aspire to connectivism. Surely there is space for a variety of networks?

What concerns me about the use of radio stations is the element of control. It is no doubt a simple fact that there are things listeners cannot ask about via the radio method. And because radio is subject to centralized control, it can be misused. What is described here is not a misuse of radio - it actually sounds like a very enlightened use of radio. But we have seen radio very badly misused, in Rwanda, for example.

You write, "Otherwise we're saying that only first world people with direct access to a network and/or the internet can aspire to connectivism. Surely there is space for a variety of networks?" I draw the connection between connectivism and democracy very deliberately, as in my mind the properties of the one are the properties of the other. So my response to the question is that connectivism is available to everyone, but in the way that democracy is available to everyone. And what that means is that, in practice, some people do not have access to connectivist networks. My observation of this fact is not an endorsement.

Yes, there is a space for a variety of networks. In fact, this discussion raises an interesting possibility. Thus far, the networks we have been talking about, such as the human neural network in the brain, or the electronic network that forms the internet, are physical networks. The structure of the network is embodied in the physical medium. But the radio network, as described above, may be depicted as a network. The physical medium - telephone calls and a radio station - are not inherently a network, but they are being used as a network.

Virtual networks allow us to emulate the functioning of, and hence get the benefit of, a network. But because the continued functioning of the network depends on some very non-network conditions (the benevolence of the radio station owner, for example) it should be understood that such structures can very rapidly become non-networks.

I would like also in this context to raise another consideration. That is related to the size of the network. In the radio station example described, at best only a few hundred people participate directly. This is, in the nature of things, a very small network. The size of the network does matter, as various properties - diversity, for example - increase through size increases. As we can easily see, a network consisting of two people cannot embody as much knowledge as a network consisting of two thousand people, much less two million people.
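A back-of-the-envelope calculation makes the point about size concrete: the number of distinct pairwise connections possible among n members is n(n-1)/2, so the connective capacity of a network grows far faster than its membership. A minimal sketch (the function name is mine, for illustration only):

```python
def possible_connections(n):
    """Number of distinct pairwise connections among n network members:
    the binomial coefficient C(n, 2) = n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Two people permit one connection; two thousand permit nearly two million;
# two million permit roughly two trillion.
for members in (2, 2_000, 2_000_000):
    print(members, possible_connections(members))
```

Diversity of connection patterns, and hence the knowledge a network can embody, scales with these combinations rather than with the raw head count.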

In light of this, I would want to say that the radio station example, at best, is not the creation of a network, but rather, the creation of an extension of the network. If the people at the radio station could not look up the answers on Google, the effectiveness of the call-in service would be very different. So it seems clear here that physical networks can be extended using virtual networks.

This is somewhat like what George means when he says that he stores some of his knowledge in other people (though it is less clear to me that he intends it this way). His knowledge is stored in a physical network, his neural net, aka his brain. By accessing things like the internet, he is able to expand the capacity of his brain - the internet becomes a virtual extension of the physical neural network.

Note that this is not the same as saying that the social network, composed of interconnected people, is the same as the neural network. They are two very different networks. But because they have the same structure, a part of one may act as a virtual extension of the other.

This, actually, resembles what McLuhan has to say about communications media. That these media are extensions of our capacities, extensions of our voices and extensions of our senses. We use a telescope to see what we could not see, we use a radio to hear what we could not hear. Thought of collectively, we can use these media to extend our thought processes themselves. By functioning as though it were a brain, part of the wider world, virtually, becomes part of our brain.

Jurgen Habermas talks about communicative action in the public sphere as an essential component of democracy. I see the process that we are using (and discussing) as a form of communicative action, and discussion groups such as this are exemplars of the activity that Habermas championed. I hope someone more versed in sociological theory can clarify, because it seems that some of the conditions that got Jurgen thinking are coming around again. (excellent Habermas interview video on YouTube)

The other point picks up on Stephen's comment about George's comment regarding storing knowledge or data in other people. Societies have always done that, from the guys that memorize entire holy texts, to elders/hunters/warriors in various societies as repositories of specialized wisdom.

Society relies on implicit skills and knowledge, the kind that can't be written down. Julian Orr's fabulous thesis "Talking About Machines, An Ethnography of a Modern Job" describes the types of knowledge that can't be documented, must be stored in other people. He points out that you can read the company manual but knowledge doesn't come until coffee time (or the bar after work) when one of the old timers tells you what it really means. Narrative processes are key. Developing the appropriate, context-based skill sets for listening to the stories, to extract the wheat from the chaff, is a critical operation in informal learning. Storing knowledge is what Academia was partly about, storing the wisdom of western civilization in the minds of society's intellectuals and paying considerable amounts of public monies to have them process and extend our collective knowledge.

All through, there are examples of the mechanisms necessary to access and participate in collective wisdom. You have to know the code, speak the language, use the proper forms of address, make the proper sacrifices, say the proper prayers, use APA format, enter the proper username and password. The internet expands the possibilities of this function as humans evolve toward a collective consciousness a la Teilhard de Chardin's noosphere. Welcome to Gaia.

First, a lot of people have talked about the importance of discourse in democracy. We can think of Tocqueville, for example, discussing democracy in America. The protections of freedom of speech and freedom of assembly emphasize its importance.

And so, Habermas and I agree in the sense that we both support the sorts of conditions that would enable an enlightened discourse: openness and the ability to say whatever you want, for example. But from there we part company.

For Habermas, the discourse is what produces the knowledge, the process of arguing back and forth. Knowledge-production (and Habermas intended this process to produce moral universals) is therefore a product of our use of language. It is intentional. We build or construct (or, at least, find) these truths.

I don't believe anything like this (maybe George does, in which case we could argue over whether it constitutes a part of connectivism ;) ). It is the mere process of communication, whether codified intentionally in a language of discourse or not, that creates knowledge. And the knowledge isn't somehow codified in the discourse, rather, it is emergent, it is, if you will, above the discourse.

Also, for Habermas, there must be some commonality of purpose, some sense of sharing or group identity. There are specific 'discourse ethics'. We need to free ourselves from our particular points of view. We need to evaluate propositions from a common perspective. All this to arrive at some sort of shared understanding.

Again, all this forms no part of what I think of connectivism. What makes the network work is diversity. We need to keep our individual prejudices and interests. We should certainly not subsume ourselves to the interests of the group. If there are rules of arguing, they are arrived at only by mutual consent, and are in any case arbitrary and capricious, as likely as not to be jettisoned at any time. And if there is an emergent 'moral truth' that arises out of these interactions, it is in no way embodied in these interactions, and is indeed seen from a different perspective from each of the participants.

Now, also, "The other point picks up on Stephen's comment about George's comment regarding storing knowledge or data in other people. Societies have always done that, from the guys that memorize entire holy texts, elders/hunters/warriors in various societies as repositories of specialized wisdom."

This sort of discourse suggests that there is an (autonomous?) entity, 'society', that uses something (distinct from itself?), an elder, say, to store part of its memory. As though this elder is in some sense what I characterized as a virtual extension of a society.

But of course, the elder in question is a physical part of the society. The physical constituents of society just are people ("Soylent Green.... It's made of people!!") in the same way that the physical constituents of a brain network are individual neurons. So an elder who memorizes texts is not an extension of society, he or she is a part of society. He or she isn't 'used' by society to think, he or she is 'society thinking'. (It's like the difference between saying "I use my neurons to think" and "my neurons think").

Again, "Society relies on implicit skills and knowledge, the kind that can't be written down. Julian Orr's fabulous thesis "Talking About Machines, An Ethnography of a Modern Job" describes the types of knowledge that can't be documented, must be stored in other people."

This seems to imply that there is some entity, 'society', that is distinct from the people who make up that entity. But there is not. We are society. Society doesn't 'store knowledge in people', it stores knowledge in itself (and where else would it store knowledge?).

That's why this is just wrong: "the mechanisms necessary to access and participate in collective wisdom. You have to know the code, speak the language, use the proper forms of address, make the proper sacrifices, say the proper prayers, use APA format, enter the proper username and password. The internet expands the possibilities of this function as humans evolve toward a collective consciousness a la Teilhard de Chardin's noosphere. Welcome to Gaia."

There are no mechanisms 'necessary' in order to access and participate in the collective wisdom. You connect how you connect. Some people (such as myself) access via writing posts. Other people (such as George) access via writing books. Other people (such as Clifford Olson) access via mass murder. Now George and I (and the rest of us) don't like what Clifford Olson did. But the very fact that we can refer to him proves that you can break every standard of civilized society and still be a part of the communicative network. Because networks are open.

A network isn't like some kind of club. No girls allowed. There's no code, language, proper form of address, format, username or password. These are things that characterize groups. The pervasive use of these things actually breaks the network. How, for example, can we think outside the domains of groupthink if we're restricted by vocabulary or format?

The network (or, as I would say, a well-functioning network) is exactly the rejection of codes and language, proper forms of address, formats, usernames and passwords. I have a tenuous connection (as Granovetter would say, 'weak ties') with other members of the network, formed on the flimsiest of pretexts, which may be based on some voluntary protocols. That's it.

From the perspective of the network, at least, nothing more is wanted or desired (from our perspective as humans, there is an emotional need for strong ties and a sense of belonging as well, but this need is distinct and not a part of the knowledge-generating process).

To the extent that there is or will be a collective consciousness (and we may well be billions of entities short of a brain) there is no reason to suspect that it will resemble human consciousness and no reason to believe that such a collective consciousness will have any say (or interest) in the functioning of its entities. Do you stay awake at night wondering about the moral turpitude of each of your ten billion neurons? Do you even care (beyond massive numbers) whether they live or die?

Insofar as a morality can be derived from the functioning of the network, it is not that the network as a whole will deliver unto us some universal moral code. We're still stuck each fending for ourselves; no such code will be forthcoming.

At best, what the functioning of the network tells us about morality is that it defines that set of characteristics that help or hinder its functioning as a network. But you're still free to opt out; there's no moral imperative that forces you to help Gaia (there's no meaning of life otherwise, though, so you may as well - just go into it with your eyes open, this is a choice, not a condition). Be, it might be said, the best neuron you can be, even though the brain won't tell you how and doesn't care whether or not you are.

This is what characterizes the real cleave between myself and many (if not most) of these other theorists. They all seem to want to place the burden of learning, of meaning, of morality, of whatever, into society. As though society cares. As though society has an interest. As though society could express itself. The 'general will', as Rousseau characterized it, as though there could be some sort of human representation or instantiation of that will. We don't even know what society thinks (if anything) about what it is (again - ask yourself - how much does a single neuron know about Descartes?). Our very best guesses are just that -- and they are ineliminably representations in human terms of very non-human phenomena.

Recall Nietzsche. The first thing the superman would do would be to eschew the so-called morality of society. Because he, after all, would have a much better view of what is essentially unknowable. The ease with which we can switch from saying society requires one thing to saying society requires a different thing demonstrates the extent to which our interpretations of what society has to say depend much more on what we are looking for than what is actually there.

Sunday, February 18, 2007

I had a look at the Creative Commons discussion draft. What they are doing is presenting a decision-tree type approach. "Is it x? If yes, it is commercial. If not, is it y? If yes, it is commercial. If not,..."

This is one way to make such a determination. But there are other ways we could do the same thing (there is an interesting parallel between ways of making a decision and ways of defining a game - see http://www.downes.ca/post/11 - this parallel is also reflected in the differences of types of artificial intelligence - the 'branching story' reminds one of Newell and Simon's General Problem Solver.)

The point is that the branching story - and indeed most any other formal process - will always leave gray areas. The attempt to define 'noncommercial' more precisely leads us down a slippery slope based on the assumption that it can be done.

My own feeling is this: if you have to ask whether or not your use is commercial, it's commercial. The very precise definitions are being used to weasel the maximum advantage out of the definition of 'noncommercial' rather than any genuine desire to respect the intent. The only people who are actually interested in the definition of 'noncommercial' are those commercial users hoping to find a loophole. Which is exactly what the precise definition of 'noncommercial' allows them to do.

My own view is that the test for 'noncommercial' is very simple: "Is it being used to make money? yes - it's commercial. No - it's not." Any further attempt at a definition constitutes an attempt by a person using it to make money to make it appear as though they're not.
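The contrast between the two tests can be sketched in code. This is only an illustration; the branch predicates (charges_fees and so on) are hypothetical stand-ins for whatever criteria a decision tree like the Creative Commons draft might enumerate, not anything from the draft itself:

```python
def is_commercial_tree(use):
    """Decision-tree style: 'Is it x? If yes, commercial. If not, is it y?...'
    Any use engineered to slip past every enumerated branch comes out
    'noncommercial', however much money it makes."""
    if use.get("charges_fees"):
        return True
    if use.get("runs_ads"):
        return True
    if use.get("promotes_brand"):
        return True
    # ...more branches; the gray area is whatever no branch anticipates
    return False

def is_commercial_simple(use):
    """The one-question test argued for above: is it being used to make money?"""
    return bool(use.get("used_to_make_money"))

# A hypothetical use crafted to dodge every branch while still making money:
loophole = {"charges_fees": False, "runs_ads": False,
            "promotes_brand": False, "used_to_make_money": True}
```

Against `loophole`, the tree answers 'noncommercial' while the simple test answers 'commercial' - which is exactly the weasel room at issue.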

Take, for example, the question about commercial or noncommercial users raised earlier this week. The presumption here is that there could be a noncommercial *use* undertaken by some commercial user that would allow the use to be characterized as noncommercial.

This is a sleight of hand. By definition, every activity undertaken by a commercial entity is commercial. Commercial entities exist solely for the purpose of making money. They may be engaged in acts that benefit the community, but that is only because benefiting the community is a reliable way of making money.

You may say that the commercial entity may be engaging in genuine charity work. Certainly, corporations have been successful in designating some of their activities as charitable activities. The 'Ronald McDonald House' springs to mind. So suppose McDonald's uses my image to promote Ronald McDonald House. Is the use noncommercial?

Now, McDonald's may be able to fool the legal system, but they're not fooling me. The McDonald's name and logo are plastered all over that charitable entity. It constitutes a part of McDonald's continuing attempts to brand themselves as a children's product (a branding I find morally reprehensible, but I digress). It is a commercial activity, as is the vast bulk of corporate 'charitable' activities.

The point here is, if you allow this camel's nose into the tent, you are now in a position where it will be necessary to look at all sorts of different types of uses in an attempt to determine whether or not they are commercial. Because the primary determinant, whether or not it is used to make money, has been taken off the table.

My feeling is that the mechanism of determination whether something is commercial or noncommercial should not cater to this misuse. If there is any sort of question as to whether the use is commercial, the presumption should be that it is commercial. This places the onus on the user to query whether the use is allowed.

Yes, I know that the purpose of Creative Commons is to eliminate the need for such queries. And Creative Commons does have a mechanism for eliminating such queries: the By license. You do not *have* to use the noncommercial clause. The fact that you *are* using it suggests that commercial use is a matter of concern to you. Which means you are *exactly* the sort of person who will be off-put by some company walking a legal tightrope to have their commercial use declared 'noncommercial'.

'Commercial use' should be defined as an 'I know it when I see it' phenomenon. Whether a use is commercial or noncommercial is a matter of *recognition* rather than rule or legislation. Defining it this way does not allow the ethically dubious to sneak through a loophole to defeat the intent of the clause. It opens the way for obviously legitimate noncommercial uses, such as posting on a personal website, while closing marginal commercial uses, like posting on a fake personal website.

There is a tendency, especially on the part of lawyers, to try to define the minutiae of the law. This tendency should be resisted. Leave the gray area reasonably large, and hence, the scope for human judgment and recognition equally large.

Saturday, February 17, 2007

Responding to Stephen Carson. I wanted to post this as a comment, but his blog requires that you be logged in to post a comment, and then provides no way to register, making commenting impossible.

Stephen Carson takes issue with my comments in a recent eLearn article where I distinguish between MIT's OpenCourseWare style of open educational resources (OERs) and the Open University's. In a nutshell, the former consists of the handouts and related materials used to support an in-person class, while the latter consists of self-study materials.

Carson is right when he asserts that I prefer the latter over the former, for reasons I'll get to in just a moment. He nonetheless appears to rather misinterpret the gesture I made in calling the one form 'green' and the other 'gold'. Perhaps he is not familiar with the open archiving movement. Proponents such as Stevan Harnad use the green-gold system to argue that both are acceptable, though they constitute different forms of access.

I was trying to say that while I don't think the OCW approach was everything it could be, I was nonetheless supportive.

But why would I take the stance I did in the first place? Carson takes issue with me when I say this: "The understated message in an initiative such as OCW is that an MIT education is not equivalent to the resources that support the education, that it consists essentially of the contact with the professors and the community that develops among the students."

Well, yeah. But the reason I say this is that this is what MIT staff said when OCW was launched, and what they continue to say to this day. I am not the one saying that OCW is not a complete package, MIT is the one saying this.

Now of course I continue on to criticize this approach in a way that MIT staff obviously would not. I ask, contra MIT staff, "is the development of an institution and a class, whether online or in person, necessary in order to translate digital content into learning?" Remember, I am not the one saying that OCW is not a complete education. MIT staff are the ones saying it.

And, it seems to me, that if MIT staff are saying that OCW is not a complete education, and that if OCW was developed and implemented by MIT staff, then it was a deliberate policy intent of MIT to not provide a complete education. The materials would be helpful, but not in themselves enough. That's what they said. So, obviously, what you would need, in addition to OCW materials, is MIT staff.

Now I would suggest that Stephen Carson not get all huffy with me for merely repeating what MIT officials told the world.

My suggestion in the article is that the creation and distribution of complete self-study packages, a la the Open University, is better. I say this, not simply because OCW 'does not address my agenda', but because I believe that there are good reasons to believe full self-study materials are better than incomplete course resources.

Carson himself makes it clear why you would want to offer complete self-study resources rather than ones that are designed to be supplemented by in-person instruction: "the data we’ve developed [PDF - 9.0MB] demonstrating that the vast majority of use of OCW is self-learning independent of institution and classroom." In other words, exactly the use not intended by MIT staff when they developed OCW.

Well, of course, this was always going to be the case, wasn't it? Only a few rich people can afford the personal tutelage of MIT professors, or even those of affiliated institutions. The vast majority of people accessing the materials, and particularly those outside the western nations, cannot afford professors. So they make do with the materials, even though they're incomplete.

That's what makes the materials good. That's what makes them 'green'. Because people can make do with them. But surely it is not unreasonable for me to prefer materials explicitly designed to support self-study over materials designed to make it harder. No?

Carson explains the point of his objection to my characterization: "the reason Stephen’s comments irk me is that they are exactly the kinds of comments likely to discourage broad participation in open sharing." In particular, "Stephen dings us on the one hand for appearing elitist and then turns around and sets a gold standard for open sharing that very few other schools are going to be able to meet."

Well, if MIT staff had come out several years ago and said something like, "We would like to be able to offer full self-study materials, but this is not our expertise, so we're going to do what we can," that would be one thing, and I could very easily have lived with that explanation.

But instead what we got was a stuffy, "Of course, it's not an MIT education," which makes me think, well, aren't we so lucky they're allowing us plebes access to their leavings. Now that's not an accurate picture either - I know as much as anyone how much work it has been to put OCW together and to make it available. But that's how it sounds - and I'm not the one making it sound that way.

A little humility would wear well on MIT, some sort of admission that it did not invent everything and cannot solve all the world's problems. If MIT cannot produce materials up to the quality of the Open University's, well, that's OK, they're still good. They're 'green'. Not 'gold', sure, but nonetheless, still worth supporting.

Would the development of materials up to the Open University's standard deter broad participation in open sharing? It's hard to say, though it's worth noting that the Open University's materials do now exist, and so we're going to see whether the deployment of that model has any impact. I don't see why it would. I haven't seen any slowdown in activity since the Open University's announcement. If anything, it has sped up. Sure, the bar is higher now. But I don't see people throwing up their hands and saying "Oh it's too hard for us." And why would they? Most academics I know - even ones who aren't from MIT - think that they could improve on existing materials. That's what drew them to the profession in the first place.

So, yeah. I'm going to criticize the attitude. I'm going to criticize the proposition that you need the tutelage of MIT professors to get an MIT-quality education. I'm going to question why the OCW Consortium site offers no community function (this is not 1995, after all - we've evolved well beyond links). I'm going to wonder why participation needs to be mediated through a secretive email exchange.

And let me emphasize, since this point seems to have been missed: I support OpenCourseWare. I think it's a good thing. I am pleased that MIT has spent the time and money to make these resources available. This support is pretty much unqualified.

Just... if something better comes along, I'm going to say it's better. In what world would I not do this?

p.s. I did not write the article for ScienceGuide (I have never even heard of them). The source Carson cites, ScienceGuide, has published a copy of the article taken from eLearn. Not that I care, but eLearn Magazine may have something to say about it.

Pretty nifty. My main complaint is that it set my user name for me, and didn't allow me to choose my own (if you're poking around in the database you can change it to 'Downes' for me, to match all my other accounts elsewhere).

Now - yes, it's distributed, in that I don't need to use specific software (such as ELGG) to create my friends network. Nothing against ELGG, it's great, but I use my own software. So this is a really nice feature. It allows me to join networks where ELGG used to cut me off.

But... all the friends lists are centrally hosted, right? And the actual software (that I call with the JS) is also centrally located. No, that's not evil or anything, but it means you're playing in the same grounds as, say, MyBlogLog (and they have that digerati boost behind them, so...)

How to make it more distributed? Well, first, allow people to store the list of friends *anywhere*. Of course, this means the list won't be in a DB any more. You'll have to code it in XML and allow it to be placed as a r/w file on some server. Which means that the code will need the username and password to that location.
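A minimal sketch of what reading and writing that XML friends file might look like. The element layout here is invented for illustration - a real deployment would likely use the FOAF/RDF vocabulary, and the server location, username and password handling described above are left out:

```python
import xml.etree.ElementTree as ET

def write_friends(path, friends):
    # Serialize the friends list as a simple XML document.
    # (Real FOAF is RDF/XML; this flat <friends>/<friend> layout
    # is just a stand-in for the idea of a portable r/w file.)
    root = ET.Element("friends")
    for name, url in friends:
        f = ET.SubElement(root, "friend", homepage=url)
        f.text = name
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

def read_friends(path):
    # Parse the file back into (name, homepage) pairs.
    root = ET.parse(path).getroot()
    return [(f.text, f.get("homepage")) for f in root.findall("friend")]
```

Because the file lives at a URL the owner controls, the widget only needs credentials for that one location, rather than a central database.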

Then, second, allow people to place instances of the software on their own server. People will like this because when the central repository of names inevitably slows down (all these widgets bog down when they get popular), theirs won't. Also, they'll feel more comfortable giving access information to a local script.

This is good, now, what you have is basically a widget that manages your FOAF file for you. But at least you're ahead of MyBlogLog.

Next thing to do is to integrate this with OpenID. After all, if people are installing a friends widget on their servers, they may as well install their OpenID at the same time. This is handy, because it allows the list of friends to be synced with the list of OpenID IDs.

Also develop a multiuser version. This is the one that you'll sell. It's still a very simple application (and you want to keep it that way). Or you might give away a multiuser version but sell a 'pro' with service and support. Your call. You also want to keep the hosted version, so people don't have to install software if they don't want to. Of course, now your hosted version is just your local instance of the pro - and if it gets busy you can just create another instance of the pro on another machine. Centralized computing is for the birds.

Now what you want to do is to add an API. I can think of one really simple API that would just kill: AllowComment. This API is available to whatever system handles comments on your website. If called by the comment system, it answers that a commenter either is or is not in your social network, and adjusts commenting privileges accordingly.
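As a sketch of that call, assuming the network is just a set of identifiers and inventing the privilege names (nothing above specifies them):

```python
def allow_comment(network, commenter_id):
    # The comment system calls this with the commenter's identity;
    # the widget answers with the privilege level to grant.
    # Privilege names here are illustrative, not a real spec.
    if commenter_id in network:
        return "post"      # in the social network: comment appears immediately
    return "moderate"      # unknown commenter: hold the comment for review
```

The point of the design is that the comment system delegates the trust decision: it doesn't need its own user database, just this one call against your network.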

Because the commenting system knows the identity of the commenter, it can also (optionally) fire a copy of the comment back to the commenter's own blog or website. A lot like CoComment, except you're not installing a browser plugin. And you get the comment directly, not via some central comment management system.

If you want to get fancy, integrate this with email, so you can use your social network to screen your email.

That's pretty much it. Don't try to add the whole web into the same application. Keep the application itself very simple - let other software manage the website and content creation things.

Keep it simple. No big installs. If you've written Explode in Rails or something you'll have to start over. Try to keep it to one simple script with a few user parameters.

Where are you going to make your money? That's easy - by creating matching or filtering services on top of this basic network. The individual 'pro' version will help you, if you want, find people with similar interests and preferences.

Sunday, February 11, 2007

Dave Pollard says, "Our community has a unique program called RoadWatch that provides a citizen report form for dangerous, careless and aggressive driving, which requires you to hand-deliver... We need something better. How would you design it?"

My response is below (his comment form is responding with an error 403 'Forbidden' so it might not appear there).

Simple.

Participating cars have digital video cameras attached to them. What they see is what you see out the front window. They are always running.

There is a 'flag as inappropriate' button on your steering wheel. Just like on Google video. When you hit this the video 60 seconds before and after the button push is sent to authorities. Video from nearby vehicles is also retrieved, if possible, for correlation. Staff review the video and determine whether penalties should apply.
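The 'flag' mechanism implies a rolling buffer: the camera continuously discards old frames, and the button press marks a window around the current moment. A toy sketch of that mechanism - the window length, data structures and frame representation are all assumptions:

```python
from collections import deque

class IncidentBuffer:
    """Rolling buffer of (timestamp, frame) pairs; illustrative only."""

    def __init__(self, window=60):
        self.window = window
        self.frames = deque()

    def record(self, t, frame):
        # Keep only frames from the last `window` seconds before `t`;
        # everything older is continuously discarded.
        self.frames.append((t, frame))
        while self.frames and self.frames[0][0] < t - self.window:
            self.frames.popleft()

    def flag(self, t_press):
        # On a button press, the buffer already holds the "60 seconds
        # before"; the caller keeps recording until t_press + window
        # and submits both halves to the authorities.
        return [f for (t, f) in self.frames if t >= t_press - self.window]
```
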

The penalty should be suspension of the vehicle license for a certain period of time. This way rich and poor suffer the same consequences; you don't get a license to drive badly just because you have money.

Each driver gets an aggregate reliability value, based on the number of reported instances and the number of genuine versus frivolous reports. Video from more reliable drivers is reported first; reliable drivers get a premium.
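One simple way to compute such a reliability value - my guess at a formula, not anything specified above - is the smoothed fraction of a driver's reports that turned out to be genuine, with a Laplace-style prior so that new drivers start near the middle rather than at an extreme:

```python
def reliability(genuine, frivolous):
    # Fraction of a driver's reports judged genuine, smoothed with
    # a +1/+2 prior so a driver with no history scores 0.5 instead
    # of an undefined or extreme value. The exact rule is a guess.
    return (genuine + 1) / (genuine + frivolous + 2)
```

A driver with nine genuine reports and one frivolous one scores well above a driver with the reverse record, which is what lets the system prioritize video from reliable reporters.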

Participation in the video program is rewarded. You are paid for having your camera turned on, not for reporting incidents. Your camera is also used to monitor road conditions (for repairs, for snow plowing, for traffic reports).

You buy the video equipment yourself and have a mechanic or specialist install it. You then receive a discount on the price of gasoline; a couple of cents per litre. For cab drivers and delivery vehicles, the equipment pays for itself within a month.

A lot of the video is retrieved and analyzed automatically. Feature recognition programs, for example, detect and measure potholes (it helps that the videos are time-stamped and GPS-located, so you can get the same view of an entity multiple times). Automatic analysis also presents proposed snowplow and salting routes and schedules on an hourly basis; these are so reliable it becomes almost automatic to simply key in the approval.
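The time-stamping and GPS-location make the correlation step straightforward: detections from different passes that round to the same coordinates can be treated as views of the same entity. A sketch, where the coordinate precision is an arbitrary choice:

```python
from collections import defaultdict

def group_detections(detections, precision=3):
    # Group (lat, lon, timestamp) detections by rounded coordinates,
    # so several drive-bys of the same pothole collapse into one
    # entity with multiple sightings. The precision is a guess at
    # how coarse GPS matching would need to be.
    groups = defaultdict(list)
    for lat, lon, t in detections:
        groups[(round(lat, precision), round(lon, precision))].append(t)
    return dict(groups)
```
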

Needless to say, people who have the video cameras installed themselves become better drivers. Nonetheless, in order to generate acceptance, laws are drafted into force ensuring that your own video can never be used to convict you of an offense.

If the video system is taken into court as an unreasonable surveillance, it can be defended on freedom of speech grounds. The system is nothing more than a mechanism to make it easier to report incidents. That is why the reporting of offenses always requires a human intervention.

That said, video analysis is used to train drivers to recognize incidents. Drivers can play simulations on their computer and practice spotting and flagging cases.

Saturday, February 10, 2007

Most people prefer to be somewhere in the middle on a sliding scale, and political opinion is no different.

So what the Republicans did, through the use of extreme viewpoints like Rush Limbaugh, Ann Coulter and Pat Robertson, is to shift the scale off to the right!

So their former position - a hard-right conservatism - now occupies the centre. And becomes the default choice. That's how we see 'balance' attained on talk shows by having two shades of right wing represented.

You're doing pretty much the same thing here. Take, for example, the scale between 'hours', '15 minutes', '3 minutes'. Well the centre and the right are both informal learning selections. Why not a scale that represents the choices I had as an instructor: '3 hours','1.5 hours', '50 minutes'?

What's interesting is that the other thing you're doing (and George Siemens does this too, and I just haven't found the words to express it) is that you are co-opting the *other* point of view as part of your point of view.

It's kind of like saying, "I support informal learning, except when I don't." George does the same thing when he describes Connectivism. "I don't care whether you call it social constructionism." I am not sure how to react - are you saying there is no fundamental difference between your position and the other position?

What is happening here is that an attempt is being made to make what is actually a fairly radical position seem moderate by saying something like, "Oh no, it's the same thing you were doing, it's just tweaking a few variables."

It's fostering the 'science as cumulative development' perspective where, most properly, it should be a 'science as paradigm shift' perspective. I don't think it's an accurate representation of the change that should be happening.

Company A wants employee B to take training course Z. Who makes the decision, the company or the employee? This is a binary switch - you can't say "they both make the decision" - that's corporate newspeak for saying "the company does".

The sliding scale disguises this by using the general term 'control'. But the point here is: either the employee is being told what to learn (some of the time, all of the time, whatever) or he or she is not. No sliding scale.

A lot of the scales are like that. They are very reassuring for managers (to whom you have to sell this stuff, because the employees have no power or control). You are telling the managers, "You don't have to relinquish control, it's OK, it will still be informal learning." But it won't be. It will just be formal learning, but in smaller increments.

In addition, the scales lock in the wrong value-set. It's like presenting the students with the option: what kind of classroom would you like, open-concept, tables and chairs, rows of desks? Looks like a scale, but the student never gets the choice of abandoning the classroom entirely.

The 'time to develop' and the 'author' scales, for example, both imply some sort of 'learning content'. What sort? As determined by the 'content' scale. Something that is produced, and then consumed. It is manifestly not, for example, a conversation. It creates an entity, the 'resource', and highlights the importance of the resource.

The people who produce stuff will be relieved. Learning can still be about the production and consumption of learning content. They can still build full-length courses and call it 'informal learning'.

Everybody's happy. Everybody can now be a part of the 'informal learning' bandwagon.

What the slider scales analogy does is to completely mask the *value* of choosing one option or another. If you pick 'more bass' or 'more treble' there really isn't a right or wrong answer; it's just a matter of taste.

But if there is something to informal learning, then there should be a sense in which you can say it's better than the alternative. Otherwise, why tout it?

You might say, well it is better, but there's still those 20 percent of cases where we want formal learning.

Supposing that this is the case, then what we want is a delineation of the conditions under which formal learning is better and those under which informal learning is better. The slider scale allows an interpretation under which everything can be set to 'formal learning' and it's still OK.

To make my point, consider the criteria I consider to be definitive of successful network learning, specifically, that networks should be:
- decentralized
- distributed
- disintermediated
- disaggregated
- dis-integrated
- democratic
- dynamic
- desegregated

Now again, any of these parameters can be reduced to a sliding scale. 'Democratic' can even be reduced to four sliding scales:
- autonomy
- diversity
- openness
- connectedness

But the underpinnings of the theory select *these* criteria, rather than merely random criteria, because *these* specify what it is *better* to be.

'Autonomy' isn't simply a sliding scale. Rather, networks that promote more autonomy are *better*, because they are more *reliable*. If you opt for less autonomy, you are making the network less reliable. You aren't simply exercising a preference, you are *breaking* the network.

Now there will be cases - let's be blunt about it - where it will be preferable to have a broken network.

Those are cases where learning is *not* the priority. Where things like power and control are the priority. A person may opt to reduce autonomy because he doesn't *care* whether it produces reliable results.

There may be other cases where the choice of a less effective network is forced upon us by constraints. If it cost $100 million to develop a fully decentralized network, and $100 thousand to develop a centralized network, many managers will opt for the less reliable network at a cheaper price.

But the point here is that there is no pretense that the non-autonomous centralized systems constitute some version of network learning simply because they are, say, dynamic. For one thing, the claim is implausible - the criteria for a successful network are not independent variables but rather impact on each other. And for another thing, the reduction of any of the conditions weakens the system so much that it can no longer be called network learning.

It's kind of like democracy. Let's, for the sake of argument, define 'democracy' as the set of rights in the charter of rights:
- freedom of speech
- freedom of the press
- freedom of conscience
- freedom of assembly
etc.

Take away one of them - freedom of speech, say. Do you still have democracy? What good is freedom of the press, or freedom of assembly, without freedom of speech?

Bottom line:

If there is anything to the theory of informal learning, then the values it expresses are more than just preferences on a sliding scale.

Representing them that way serves a marketing objective, in that it makes people who are opposed to the theory more comfortable, because it suggests they won't really have to change anything.

But it is either inaccurate or dishonest, because it masks the *value* of selecting one thing over another, and because it suggests that you can jettison part of the theory without impacting the whole.

And in the case of the particular scales represented here, the selection locks people into a representation of the theory that is not actually characteristic of the theory. Specifically, it suggests that informal learning is just like formal learning in that it is all about the production and consumption of content.

And I think this whole discussion points to the dilemma that any proponent of a new theory faces: whether to stay true to the theory as conceived, or whether to water down the theory in order to make it more palatable to consumers and clients (some of whom may have a vested interest in seeing the theory watered down).

And it seems to me, the degree to which you accept the watering down of the theory is the degree to which you do not have faith in it.

Is informal learning *really* about duration, content, timing and the rest? Probably not. But if not, then what is it about? What are the *values* expressed by the theory?

Responding to John Martin: "However I am not sure that it (pattern recognition) is an innate feature as opposed to a learned skill."

It is both.

Human neurons naturally form connections. That is, in fact, their sole function. Any time they are presented with input (such as experience) they will react by strengthening or weakening connections. Because these connections are sensitive to input, they will reflect patterns in that input. This is not a conscious act; it is not the same as saying we are looking for patterns. It's more like the way you distinguish between red and blue. You just do it.

After a certain period of time, this process results in a base of pre-existing patterns of connectivity in the mind. The child, for example, has learned to identify objects. Slightly older children, for example, have learned to recognize faces. These pre-existing patterns now influence the recognition of patterns in perception.

There comes a point where the recognition of patterns in the environment will depend entirely on the influence of these pre-existing patterns. The distinction between subtle shades of red, for example, that the artist can make. The ability to identify a type of wine. The capacity to apply mathematical formulas to equations. In such cases, it would be correct to say that pattern recognition is entirely a learned ability.
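The strengthening and weakening of connections described here can be illustrated with a toy Hebbian update rule: connections between units that are active together are strengthened, and unused connections slowly decay, so over time the weights come to reflect patterns in the input. This is an illustration of the idea, not a model of actual neurons, and the learning and decay rates are arbitrary:

```python
def hebbian_step(weights, pre, post, rate=0.1, decay=0.01):
    # weights[i][j] is the connection from input unit i to output
    # unit j. Co-active pairs are strengthened (rate * pre * post);
    # every connection also decays slightly, so connections that
    # the input never exercises fade away.
    return [[w + rate * pre[i] * post[j] - decay * w
             for j, w in enumerate(row)]
            for i, row in enumerate(weights)]
```

Nothing here "looks for" patterns; repeated exposure to the same co-activations simply leaves the corresponding connections stronger, which is the sense in which the patterns are in the connections rather than represented by them.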

Friday, February 09, 2007

My experience is that if you leave your career in the hands of others, they will actively damage your career. So I think some of the points should be even more strongly worded.

For example:

You write, "No one will tell you what experience you should be obtaining, let alone help you get it." Strictly speaking, this isn't true. They will recommend all sorts of experiences - company training courses, for example. But they will be the wrong experiences.

And you write: "Manage your own career. No one else will." Again, someone else will. They will tell you what you should do, what you are allowed to do, and what you should not do. In so doing, they will manage your career into the ground.

The point here, too, is that you should do more than what you were simply hired to do. But not necessarily more for the company. When you are at work, working on your career, you should understand that you are working, first, for your own benefit. Any benefit the employer gets out of it is an exchange of mutual value. And the employer should never get everything.

As they used to say to people climbing around the rigging on the high seas: one hand for the ship, one hand for yourself.

Wednesday, February 07, 2007

I realize that the tendency is to think that identity management can be assigned to a (government) office, but it's not that simple.

Identity - like a name - is something that is not merely associated with a person but which is owned by a person.

Paramount among the rights of a personal identity, for example, is the right to *not* identify oneself.

Remember when Radio Shack always used to get everyone to fill out their name and address merely in order to buy a transistor? This was something that perplexed and annoyed electronics buffs.

Whether I want to buy a coffee or to ask for a 1998 tax form, I want to be able to do so without saying who I am.

For this reason, centralized identity systems - such as the one being alluded to in this article - fail. They simply do not assign enough (or any) control to the identity owner.

That is why the best hope for online identity is some system of distributed self-identification, such as is postulated by OpenID.

Because, in the end, it is not the security needs of government and business that will drive ID adoption. That is just wishful managers clinging to the fantasy that consumers actually care about government or corporate needs.

No, the having and use of an identity must satisfy genuine needs of the identity owner. It must be in the owner's interest to genuinely and truthfully report his or her identity.

That's why ATM cards work. Accurately reporting one's identity grants one access to one's own money. And the motive to keep the password secret - essential for authentication - is the desire to keep other people away from that money.

It is interesting to note that while credit cards - which granted people money they didn't have - caught on instantly, it took the establishment of ATMs before corresponding debit cards became popular. People would not establish a special identity *simply* for paying businesses.

For most people, government is something they pay into, not something they get a benefit from. So there is no intrinsic motivation to cooperate with a government identity scheme.

And, indeed, there would be outright resistance were the scheme tied to such things as social insurance numbers (used to grant the bearer access to Employment Insurance) and health care numbers (used to grant access to health care services).

Again, this is because there is no benefit to the identity holder, and a loss of control.

Heck, I don't even want my Flickr identity to be known by Yahoo - I want to keep these domains completely separate - because the only thing I get from Yahoo is spam.

So in the consideration of identity and government services, while it is tempting to look for technology that adapts to the government's needs, it will also be necessary to reassess those needs to adapt to the new realities of identity.

Tuesday, February 06, 2007

There is a very big difference between our putting walls around our own space, and other people putting walls there.

When we build or buy (or rent) our own walls, we choose when they are open or closed - when the door is locked or unlocked, who we let in, whether the curtains are drawn or open, etc.

That's called a home.

When other people control the walls, they choose whether or not to open them (which is why the invasion of search engines in Friendster comes as a rude surprise), whether the door is locked or unlocked (which is why having your personal data owned by Fox is discomfiting), etc. When other people control the walls, you can't simply pack up your (digital) possessions and leave.

That's called a prison.

Of course even these generalizations are misleading.

Sometimes our own home is a trap. Sometimes we wall ourselves off from the rest of the world, keeping ourselves apart in ways that are not healthy. It's like when the emergency services can't get through your front door to respond to 911. Or when we hide in the basement and pretend the tsunami outside is not real.

And sometimes the prison is a sanctuary. When we cannot afford walls of our own, or when we are in danger of being pursued by predators, or we need a place for a large group of us to meet in private, then we want a place with high walls and guards around the perimeter.

Walls - like most other things - are ethically neutral. Neither good nor bad.

It's what we do with them that matters, and what other people do with them to us. If the walls increase both our security and our freedom, then (all else being equal) they are good. If they reduce our security and freedom, they are not so good.

From my perspective, the best wall is one with a door, and the best door is one with a key.

Saturday, February 03, 2007

Posted to the Connectivism Conference forum (which hits a login window - click 'login as guest' (middle of the left-hand column) - I'm sorry, and I have already complained to the conference organizer).

At its heart, connectivism is the thesis that knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks.

It shares with some other theories a core proposition, that knowledge is not acquired, as though it were a thing. Hence people see a relation between connectivism and constructivism or active learning (to name a couple).

Where connectivism differs from those theories, I would argue, is that connectivism denies that knowledge is propositional. That is to say, these other theories are 'cognitivist', in the sense that they depict knowledge and learning as being grounded in language and logic.

Connectivism is, by contrast, 'connectionist'. Knowledge is, on this theory, literally the set of connections formed by actions and experience. It may consist in part of linguistic structures, but it is not essentially based in linguistic structures, and the properties and constraints of linguistic structures are not the properties and constraints of connectivism.

In connectivism, a phrase like 'constructing meaning' makes no sense. Connections form naturally, through a process of association, and are not 'constructed' through some sort of intentional action. And 'meaning' is a property of language and logic, connoting referential and representational properties of physical symbol systems. Such systems are epiphenomena of (some) networks, and not descriptive of or essential to these networks.

Hence, in connectivism, there is no real concept of transferring knowledge, making knowledge, or building knowledge. Rather, the activities we undertake when we conduct practices in order to learn are more like growing or developing ourselves and our society in certain (connected) ways.

This implies a pedagogy that (a) seeks to describe 'successful' networks (as identified by their properties, which I have characterized as diversity, autonomy, openness, and connectivity) and (b) seeks to describe the practices that lead to such networks, both in the individual and in society (which I have characterized as modeling and demonstration (on the part of a teacher) and practice and reflection (on the part of a learner)).

Tony writes, "Knowledge is not learning or education, and I am not sure that Constructivism applies only to propositional learning nor that all the symbol systems that we think with have linguistic or propositional characteristics. "

I think it would be very difficult to draw out any coherent theory of constructivism that is not based on a system with linguistic or propositional characteristics. (or as I would prefer to say, a 'rule-based representational system').

Tony continues, "The Constructivist principle of constructing understandings is an important principle because it has direct implications for classroom practice. For me it goes much further than propositional or linguistic symbol systems."

What is it to 'construct an understanding' if it does not involve:
- a representational system, such as language, logic, images, or some other physical symbol set (i.e., a semantics)
- rules or mechanisms for creating entities in that representational system (i.e., a syntax)?

Again, I don't think you get a coherent constructivist theory without one of these. I am always open to be corrected on this, but I would like to see an example.

Tony continues, "I am disturbed by your statement that "in connectivism, there is no real concept of transferring knowledge, making knowledge, or building knowledge." I believe that if Connectivism is a learning theory and not just a connectedness theory, it should address transferring understanding, making understanding and building understanding."

This gets to the core of the distinction between constructivism and connectivism (in my view, at least).

In a representational system, you have a thing, a physical symbol, that stands in a one-to-one relationship with something: a bit of knowledge, an 'understanding', something that is learned, etc.

In representational theories, we talk about the creation ('making' or 'building') and transferring of these bits of knowledge. This is understood as a process that parallels (or in unsophisticated theories, is) the creation and transferring of symbolic entities.

Connectivism is not a representational theory. It does not postulate the existence of physical symbols standing in a representational relationship to bits of knowledge or understandings. Indeed, it denies that there are bits of knowledge or understanding, much less that they can be created, represented or transferred.

This is the core of connectivism (and its cohort in computer science, connectionism). What you are talking about as 'an understanding' is (at a best approximation) distributed across a network of connections. To 'know that P' is (approximately) to 'have a certain set of neural connections'.

To 'know that P' is, therefore, to be in a certain physical state - but, moreover, one that is unique to you, and further, one that is indistinguishable from other physical states with which it is co-mingled.

Tony continues, "Connectivism should still address the hard struggle within of deep thinking, of creating understanding. This is more than the process of making connections."

No, it is not more than the process of making connections. That's why learning is at once so simple it seems it should be easily explained and so complex that it seems to defy explanation (cf. Hume on this). How can learning - something so basic that infants and animals can do it - defy explanation? As soon as you make learning an intentional process (that is, a process that involves the deliberate creation of a representation) you have made these simple cases difficult, if not impossible, to understand.

That's why this is misplaced: "For example, we could launch into connected learning in a way which forgets the lessons of constructivism and the need for each learner to construct their own mental models in an individualistic way."

The point is:
- there are no mental models per se (that is, no systematically constructed rule-based representational systems)
- and what there is (i.e., connectionist networks) is not built (like a model); it is grown (like a plant)

When something like this is said, even basic concepts such as 'personalization' change completely.

In the 'model' approach, personalization typically means more: more options, more choices, more types of tests, etc. You need to customize the environment (the learning) to fit the student.

In the 'connections' approach, personalization typically means less: fewer rules, fewer constraints. You need to grant the learner autonomy within the environment.

So there's a certain sense, I think, in which the understandings of previous theories will not translate well into connectivism, for after all, even basic words and concepts acquire new meaning when viewed from the connectivist perspective.

Response (1) to Bill Kerr

Bill Kerr writes, "It seems that building and metacognition are talked about in George's version but dismissed or not talked about in Stephen's version."

Well, it's kind of like making friends.

George talks about deciding which people would make useful friends, how to make connections with those friends, building a network of those friends.

I talk about being open to ideas, communicating your thoughts and ideas, respecting differences and letting people live their lives.

Then Bill comes along and says that George is talking about making friends but Stephen just ignores it.

Bill continues, "Either the new theory is intended to replace older theories... Or, the new theory is intended to complement older theories. By my reading, Stephen is saying the former and George is saying the latter but I'm not sure."

We want to be more precise.

Any theory postulates the existence of some entities and the non-existence of others. The most celebrated example is Newton's gravitation, which postulated the existence of 'mass' and the non-existence of 'impetus'.

I am using the language of 'mass'. George, in order to make his writing more accessible, (sometimes) uses the language of 'impetus'. (That's my take, anyway.)

Response (2) to Bill Kerr

Bill Kerr writes, "Words / language are necessary to sustain long predictive chains of thought, eg. to sustain a chain or combination of pattern recognition. This is true in chess, for example, where the player uses chess notation to assist his or her memory."

This is not true in chess.

I once played a chess player who (surprisingly to me) turned out to be far my superior (it was a long time ago). I asked, "how do you remember all those combinations?"

He said, "I don't work in terms of specific positions or specific sequences. Rather, what I do is to always move to a stronger position, a position that can be seen by recognizing the patterns on the board, seen as a whole."

See, that's the difference between a cognitivist theory and a connectionist theory. The cognitivist thinks deeply by reasoning through a long sequence of steps. The non-cognitivist thinks deeply by 'seeing' more intricate and more subtle patterns. It is a matter of recognition rather than inference.

That's why this criticism, "Words / language are necessary to sustain long predictive chains of thought," begs the question. It is leveled against an alternative that is, by definition, non-linear, and hence, does not produce chains of thought.

Response (3) to Bill Kerr

Bill Kerr writes, "I don't see how what you are saying is helpful at the practical level, the ultimate test for all theories."

This is kind of like saying that the theory of gravity would not be true were there no engineers to use it to build bridges.

This is absurd, of course. I am trying to describe how people learn. If this is not 'practical', well, that's not my fault. I didn't make humans.

In fact, I think there are practical consequences, which I have attempted to detail at length elsewhere, and it would be most unfair to indict my own theoretical stance without taking that work into consideration.

I have described, for example, the principles that characterize successful networks in my recent paper presented to ITForum (I really like Robin Good's presentation of the paper - much nicer layout and graphics). These follow from the theory I describe and inform many of the considerations people like George Siemens have rendered into practical prescriptions.

And I have also expounded, in slogan form, a basic theory of practice: 'to teach is to model and demonstrate, to learn is to practice and reflect.'

No short-cuts, no secret formulas, so simple it could hardly be called a theory. Not very original either. That, too, is not my fault. That's how people teach and learn, in my view.

Which means that a lot of the rest of it (yes, including 'making meaning') is either (a) flim-flammery, or (more commonly) (b) directed toward something other than teaching and learning. Like, say, power and control.

Bill continues, "Stephen, your position on intentional stance sounds similar to Churchland's position on eliminative materialism."

Quite right, and I have referred to him in some of my other work.

"Other materialist philosophers, such as Dennett, argue that we can discuss in terms of intentional stance provided it doesn't lead to question begging interpretations."

Well, yes, but this is tricky.

It's kind of like saying, "Well, for the sake of convenience, we can talk about fairies and pixie dust as though they are the cause of the magical events in our lives." Call it "the magical stance".

But now, when I am given a requirement to account for the causal powers of fairies, or when I need to show what pixie dust is made of (at the cost of my theory being incoherent) I am in a bit of a pickle (not a real pickle, of course).

The same thing goes for "folk psychology" - the everyday language of knowledge and beliefs Dennett alludes to. What happens when these concepts, as they are commonly understood, form the foundations of my theory?

"Knowledge is justified true belief," says the web page. Except, it isn't. The Gettier problems make that pretty clear. So when pressed to answer a question like, 'what is knowledge' (as though it could be a thing), my response is something like "a belief we can't not have." Like 'knowing' where Waldo is in the picture after we've found him. It's like recognition. And what is 'a belief'? A certain set of connections in the brain. Note that these statements entail that there is no particular thing that is 'a bit of knowledge' or 'a belief'.

Yeah, you can talk in terms of knowledge and beliefs. But it requires a lot of groundwork before it becomes coherent.

Bill continues, "Even though we don't understand 'constructing meaning' clearly we can still advise students in certain ways that will help them develop something that they didn't have before."

What, like muscles?

Except, they always had muscles.

Better muscles? Well, ok. But then what do I say? "Practice."

"I think it's more useful and practical to operate on that basis, for example, Papert's advice on 'learning to learn' which he called mathetics still stands up well."

But what if it's wrong? What if it is exactly the wrong advice? Or moreover, what if it has to do with the structures of power and control that have developed in our learning environments, rather than having anything to do with learning at all?

"Play is OK" has to do with power and control, for example. "Play fosters learning" is a different statement, much more controversial, and yet more descriptive, because play is (after all) practice.

"The emotional precedes the cognitive." Except that I am told by psychologists that "the fundamental principle underlying all of psychology is that the idea - the thought - precedes the emotion."

And so on. Each of these aphorisms sounds credible but, when held up to the light, is not well-grounded. And hence, not practical.