28 March 2007

Tuesday I had lunch with, and attended sessions conducted by, researchers from DEEP: The Digital Education Enhancement Project at England's Open University. DEEP is geared toward building capacity and access to educational information in the "global south" - principally sub-Saharan Africa (see the full report downloads). But the project's vision - using the mobile phone, perhaps simply to support teaching by allowing fast connections between rural teachers and their support structures, and using mobile-connected handheld devices (either PocketPCs or mobile phones with pre-loaded memory cards) to provide direct learning support - touches on a much larger issue.

To me, the dominant issue in education for those with "disabilities," and for those who are not white, developed-world Protestants, is to shift the focus from a colonial strategy - "make 'them' like 'us'" - to a context-based approach that allows people to use what is native to their life situation to build the best possible access to information: "Remapping the Education Technology Landscape," as Tom Power of the Open University put it, "using new models, new dialogues, new research questions, and new toolkits."

Because one thing that links a dyslexic student in the US with poor students in any underfunded developed-world school, and with students in technology-deprived developing nations, is the fact that "imposed technology" rarely works. Standard computers make life harder in many ways for the dyslexic student, and cannot be maintained by the other two groups. All need technology that is inexpensive, adaptable, and sustainable. For the dyslexic student this might mean keyboarding via mobile phone because of keypad simplicity and effective word prediction, or it might mean handhelds with built-in screen readers, or it might mean education via podcast. For those in under-resourced environments it may mean data delivery and primary communication via text message and the mobile web. Whichever, whatever, the idea is to take what is available and naturally adaptable and to use it, not to build computer labs (which truly make no sense in any school - computers should be integrated into every classroom) in schools with minimal electrical capacity and no tech staff.

There is also the key point of jumping onto the free and ubiquitous. "As many people have gained access to the telephone in the first five years of the 21st Century as in the entirety of the 20th Century," the presenters from OU said, noting that almost all of that gain has been in the developing world. Similarly, the once clumsy and hard-to-access audio book cassette has been replaced by the instantly downloadable digital text or digital audio book for those with dyslexia. But educators would rather impose the desktop computer on the developing world and the task-specific Daisy Book Reader on the dyslexic.

And finally, I have always been troubled by the way in which American educators, in particular, go out into the world to spread American educational theory, or the way places like my own Michigan State University bring in international students to "Americanize" them and send them home to "Americanize" their cultures. Far better, and far more sustainable, is providing context-based technical support that is based in divergent human needs.

27 March 2007

From Futurelab... teaching students to build games for mobiles that teach physics - Newtoons.

Years ago... fifteen years ago, I heard a member of the New York State Board of Regents (the main policy group for all education in New York) talk about how "we do education backwards." "Every five-year-old comes to school ready to learn physics and geometry and storytelling," he said, "but we force them into symbolic language instead. Then, when they're really ready for symbolic language, we force them into really boring physics classes."

I thought of this listening to Futurelab's J. Pykett this morning. Physics is fun, or it should be. Gaming/entertainment, often used to show a "defiance of physics" - think Star Wars - can also do the opposite - think Stanley Kubrick's 2001 - that is, reinforce it.

Plus the idea of somewhat older students designing engaging "teaching" games for somewhat younger students taps into all we know about good cognitive learning - the best way to learn something is to teach it. And teaching it via handheld gaming promotes efficiency and clarity.

CAL 2007 is underway at Trinity College in Dublin. The subtitle for this year's conference is "Development, Disruption, and Debate," and after one and a third days it has been fascinating. More soon, but a focus on creativity, "Learner Generated Contexts," and disrupting the traditional norms of teaching and educational structures makes this a great place to consider technology not as a curricular tool, but as a change agent that alters our ideas of what education can be.

Some sessions I've attended...

Creativity is a risky business: How might ICT support a pedagogy for improvisation? A. Loveless*, J. Burton, K. Turvey; University of Brighton, UK

Disruption? What disruption? None are so blind as those who do not envisage a different scenario. K. Johnston*, A. FitzGibbon, E. Oldham; University of Dublin, Ireland

25 March 2007

"The Cyberlink System can perhaps be thought of as the next step in the evolution of the human-computer input interface. The Cyberlink System is controlled by the voltages found on the surface of the forehead. When the muscles of the body contract a corresponding voltage can be detected on the surface of the skin. In a similar fashion the actions of the brain result in the production of voltages that migrate to the surface of the skin. These voltages, which we refer to as bio-potentials, are the signals that drive the Cyberlink system. Since the voltages at the forehead are the result of both brain and body activity the Cyberlink system represents a brain-body actuated control technology. The Cyberlink system combines eye-movement, facial muscle, and brain wave bio-potentials detected at the user's forehead to generate computer inputs that can be used for a variety of tasks and recreations."

All this to say that "the next step" in human-computer interface might be here. I am still dazzled by Erica, the best eye-gaze system on the market, with almost instant set up and rapid typing and computer control possible on the TabletPC platform. Even ADHD me could make that work well.

I'm less sure of the Cyberlink Mindmouse; it seems highly complex, tough to learn (perhaps), and maybe a year or two "away," but it is coming. The Mindmouse works by matching brainwave patterns to intentions and actions, allowing a computer to respond, essentially, to thoughts. It does work, and that alone is amazing. That it is commercially available is also incredible, and it surely can create access for people with no other choices.

The paper by Dr. Mary Christen and Dr. Andrew Junker is worth reading, and Mindmouse is worth watching.

23 March 2007

Because students who need to use Assistive Technology at the university level will most often enter higher education knowing nothing about it - whether in the US, UK, or anywhere else - fast, effective training is essential.

But what kinds of training work? What do they accomplish? How can we make them better? And, perhaps most importantly, who exactly needs to be trained?

Mrs. Draffan, of the University of Manchester (and soon to be at Southampton), presented "Innovative Practices in AT Training for LD (Dyslexic) Students," which looked at the results of pre- and post-training surveys in England, where a strong database exists because the government provides grants to students to acquire appropriate devices and conducts the evaluations and the training. And because Mrs. Draffan is one of the leading lights of post-secondary technology, she also provided vast insight into what is really needed.

The first thing is clear: in the UK as in the US, primary and secondary education are simply not doing their job (see "Toolbelt Theory" below). In the telephone surveys of 455 dyslexic students, only 11% had knowledge of Assistive Technology before their encounters with the evaluation and training programs. Actually, based on my American experience, this seems quite high (which is depressing). So, perhaps, the first people who need training are those running, and teaching in, our primary and secondary schools.

The survey also showed that 95% used prescribed hardware and 91% were satisfied with it; 82% used prescribed software and 91% were satisfied. 75% thought the software was easy to use and 84% thought the hardware was easy to use - but only 46% had actually received formal training.

The highest happiness quotients came from scanners and text-to-speech systems. The greatest difficulties - but also the biggest satisfactions - came from Speech Recognition.

None of this is surprising. All of us who have trained know that it is easier to teach hardware than software - fewer choices, fewer variations, more straightforward solutions - and we also know that without effective training software gets very confusing, and Speech Recognition without training typically goes nowhere.

The survey also found that even "satisfied" students might not be using their assistive technology to their best advantage. Typically, Mrs. Draffan said, students used just five of the control features in Text-Help, and were similarly limited with other software. They also seemed to struggle with some of the prescribed choices, reporting problems that were probably predictable in Dragon, and issues with text-to-speech voices that might have been solved by selecting a different software package.

But the biggest difficulties lay in the intersection of this technology and the university environment. After all, I know all about technology, and all about accessibility, but when a professor "hands" me a paper in bitmapped PDF form, I am still going to struggle mightily. I might get a "virtual printer" to access it and then spend hours cleaning it up. Or I might - if the print quality is unexpectedly good - print it out and scan it back in with optical character recognition, but still, the conversion process taps out all the energy I might have spent reading the article itself.

So, who has to be trained? Yes, add university faculty to the list.

OK, but what needs to be done differently in training? Students, Mrs. Draffan pointed out, wanted short training sessions, but preferred them at home. This is as hard (travel expense for trainers, scheduling) as it is understandable (long training sessions become less effective; training in public spaces, or on unfamiliar computers, is difficult). Students with laptop computers seemed to do better, because they could carry their own computers into on-campus training. So laptops might be the route - I prefer them for students myself because they make computer use a constant, support in-class activities, and are much easier to carry in for repair when that is needed.

Students also wanted task-based training, training specifically geared to their academic needs. And they wanted "just-in-time" training. This suggests that trainers need to be better connected to the university and the departments within it, and that training must be far more specific. It also suggests experimenting with alternatives: drop-in centers, call lines, online Camtasia "videos," and the like.

But there is something else. Many students rejected training because they felt themselves to be "good with computers." "Good with computers," however, as Mrs. Draffan indicates, rarely translates to effective and appropriate use of AT devices without training. And the training offered in England was all "pre-university-start" training. Two things come into play here (in my opinion): First, students do not know what they will need to do in a university before they get there, so the training is disconnected from reality. Second, adolescents do not like to admit to adults that they are weak in tech areas. Mrs. Draffan and I agree that training after the first semester works better, because by then students have discovered what does not work. But we also know that it is hard to get students back for that second go-round.

Well, the report suggests essential things:

1. Training at the right time and place - and that will not be the same for all students. But comfort is essential.

2. Training that suits the user's abilities and ICT skills - it cannot be "over their head" or treat them as "babies."

3. Training based in the tasks to be undertaken - typical assignments, university websites, Windows and Mac environments, class scheduling software, all must be explored.

4. Training that introduces the most needed elements of the AT system - perhaps those "five buttons" - and expands later.

5. Training via alternative formats - drop-in facilities, just-in-time support by phone, forum, or website, notes that annotate the training, quick system crib sheets and guides (A4 or 8.5x11), audio/video/Flash versions, multimedia CDs or downloads, screen grabs (screen shots) made during the training that relate directly to task performance, and Camtasia recordings.

What I said at CSUN: (this is very long, but I wanted to offer most of the PowerPoint here - feel free to skip down to other topics - Firefox Speaks in Simple Terms, Creating an Accessible University, the IntelliSwitch and New Software, Accessible School Email and more...)

"Toolbelt Theory" suggests that we must teach our students how to analyze tasks, the task-completion environment, their own skills and capabilities, and an appropriate range of available tools… and then let them begin to make their own decisions.

Services to those labeled "disabled" are far too often presented as "gifts from concerned people," the style is, of course, medical, with evaluations, and prescriptions, and implementations set up by professionals. None of this builds independence. None of this builds life skills. None of this prepares students for life after school. And, truly, none of it is realistic because it all pretends that one defined, professionally chosen, solution will solve all of a person's needs forever. And, obviously, that is as ridiculous as it sounds.

Toolbelt Theory is based in the concept that students must learn to assemble their own readily available collection of life solutions. They must learn to choose and use these solutions appropriately, based in the task to be performed, the environment in which they find themselves, their skills and capabilities at that time, and the ever-changing universe of high and low-tech solutions and supports. After all, few of us have a toolbox with just one screwdriver, or just the tools we were given when we were ten-years-old.

So, the Toolbelt is designed to:

•Break the dependence cycle

•Develop lifespan technology skills

•Limit limitations

•Empower student decision making

•Prepare students for life beyond school

Students are taught a specifically ordered version of Joy Zabala's SETT Framework (Skills, Environment, Tools, Tasks). Specifically ordered because, in human experience, the choice of tools is always task-dependent. At the most basic, I need to know if I need to cut wood or join it before I start looking for a tool to use. Environment is next because it makes a huge difference whether I am cutting the wood in my garage or in a forest and whether I am cutting the wood to burn or use in a cabinet. Then, I need to know my skills – Am I strong? Am I exhausted? Is my right hand broken? Am I simply a danger to myself and others with power tools? And finally, once I know all of that, I need to know which tools exist – if I have never seen a chainsaw, as many dyslexic students (for example) have never seen a good digital reader, I will spend long hours hacking ineffectively with an axe.

So, SETT is re-conceived as TEST:

Task

1. What needs to be done? (when possible, break the task down into component parts)

Environment

1. Where must this be done (or is typically done)?

2. Under what time constraints?

3. What is the standard method of task completion?

4. How does the person with the disability interact within this environment?

5. Who is the task being done for? (specifics of teacher, employer, other expectations)

Skills

1. What specific strengths does the person with the disability bring to this task?

2. What specific weaknesses interfere with that person's ability to complete the task?

3. What is that person's "tool acquisition aptitude" and what tools are they currently comfortable with?

Tools

1. What tool best "bridges the gap" between the current skill set and what is needed for task completion?

2. If the tool is not already "in the toolbox" (the person has been successfully trained in its use), how does the environmental timeline match with the needed learning curve?

3. If it is not possible to use the "best tool" within this environment what is the "back-up tool"? How do we pre-train so the best tool can be used the next time?
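The TEST sequence above is, at bottom, an ordered checklist, and it can be sketched as a simple data structure. This is purely illustrative - the stage names and questions come from the framework above, but the code itself is my own hypothetical sketch, not part of Toolbelt Theory:

```python
# Illustrative sketch only: the TEST framework (Task -> Environment ->
# Skills -> Tools) as an ordered checklist. The order matters, which is
# the whole point of re-conceiving SETT as TEST.
from collections import OrderedDict

TEST_FRAMEWORK = OrderedDict([
    ("Task", [
        "What needs to be done? (break the task into component parts when possible)",
    ]),
    ("Environment", [
        "Where must this be done (or is typically done)?",
        "Under what time constraints?",
        "What is the standard method of task completion?",
        "How does the person interact within this environment?",
        "Who is the task being done for?",
    ]),
    ("Skills", [
        "What specific strengths does the person bring to this task?",
        "What specific weaknesses interfere with completing the task?",
        "What is the person's tool acquisition aptitude, and which tools are familiar?",
    ]),
    ("Tools", [
        "What tool best bridges the gap between current skills and the task?",
        "If the tool is new, does the timeline allow for its learning curve?",
        "If the best tool is unavailable, what is the back-up, and how do we pre-train?",
    ]),
])

def walk_test(answer):
    """Walk the stages in order, collecting an answer for each question.

    `answer` is any callable taking (stage, question) and returning a
    response - in practice, the student's own analysis.
    """
    results = OrderedDict()
    for stage, questions in TEST_FRAMEWORK.items():
        results[stage] = [(q, answer(stage, q)) for q in questions]
    return results
```

The key design point is that `walk_test` cannot reach the Tools stage without first passing through Task, Environment, and Skills - mirroring the claim that tool choice is always task-dependent.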

But, we cannot just implement this in our schools right now, because our schools are unprepared. Essential things must be in place to do this effectively:

•Up to date technology

•Schools cannot continue to prepare students to use 20th Century technology

•They must be preparing students to use the technology that will be around in the next decade.

Start by asking: is the technology in your school…

•Up to that used in most major retail stores?

•Up to that used in most offices?

•Ubiquitous technology

•Specialized technology is always more expensive, and more difficult to use “everywhere”

•The mobile phone, the PocketPC, Google-based solutions, Microsoft-based solutions, Firefox-based solutions, are less expensive and everywhere at the start.

This is just like the tale of those horrible old cassette players Telex made for RFB&D (US: Recording for the Blind & Dyslexic) and the like. You felt like an idiot being seen with one, you couldn't use them in the car, and you had to lug your own equipment around. Using regular cassettes would have been superior in every way. This is why I dislike the whole idea of Daisy Books – and other proprietary formats. I want plain text I can use the way I want to, wherever I am…

Start by asking: Does your school…

•Ban mobile phones?

•Ban mp3 players even when students are working individually?

•Have all available free Assistive Technology installed on all computers?

Why is school, especially in the US, the least technologically equipped environment many of your students will be in all day? Why does school actually prevent students from developing their own – perfectly reasonable – solutions… such as baseball caps which focus attention and keep your eyes away from flickering fluorescent lights?

•Choices of hardware and software readily available

•Students must make their own selections and learn how to evaluate

•Start small at young ages, and move up to discovering the world

Why are you forcing your students to use one absurd, antiquated, non-ergonomic keyboard when there are, literally, thousands of choices available – couldn't you have at least a dozen different ones in your school building?

Start by asking: Does your school…

•Have various keyboards and mice for students to choose from?

•Have more than one form of literacy technology?

•Encourage a choice of calculators?

•Willingness to allow failure

•Without failure there is very little learning.

•Make failure “low cost” – learn from the world of video games

•Failure now beats failure later.

Start by asking: Does your school…

•Encourage all students to try differing methods of reading?

•Of writing?

•Have assessment method choices?

•Allow choices of seating?

•Instructional tolerance

•Accepting loss of classroom control

•Accepting that all students will learn their own ways to do things

•Emphasizing “what” instead of “how”

Start by asking: Does it matter…

•“how” a book is “read”?

•“how” a paper is “written”?

•“how” a student “gets to” a math answer if the concept is understood?

Does your school…

•Privilege methods?

Does anyone in your school ever ask a student…

•“What if the computer breaks?”

•“What if the power goes out?”

School often begins with being told that we are "making [our] fives wrong" and ends with being told that our "citations are wrong." In neither case are we necessarily being incomprehensible – the teacher knows that it is a five and knows where the citation is from, but they are only interested in style, not content. (Oh, and the answer to the "computer breaks" question is, "what if your pencil breaks, what if your pen runs out of ink?")

Now, the goal is to empower students to continuously assess their changing needs and the ever changing technological environment that surrounds them, and allow them to build their own toolbelts of appropriate solutions to their life challenges.

The student with reading issues will likely need differing solutions for differing tasks for different instructors. She might watch a video of a Shakespeare play, listen to an audiobook of Joyce, need a simple computer reader with annotation capabilities for textbook reading, use a reading pen for a restaurant menu, and require a high-tech literacy support program for testing.

A student with math issues might require just his mobile phone calculator for work and a downloadable computer graphing calculator for homework, but may need to know to transfer data that he cannot write accurately from the teacher's calculator if that teacher distrusts the technology or suspects cheating whenever high-tech gadgets appear.

A student with writing problems might use speech recognition at home but type fastest using a mobile phone's word prediction for in-school answers.

There is not one answer. Tool choice is based in task, needs, environment, prior knowledge, availability, fashion, a sense of self, and the vagaries of what makes one person comfortable but not another, among many other things.

One AT device for each “issue” is as limiting as would be a toolbox with one saw, one screwdriver, and one crescent wrench.

There is a key final part of this learning, self-feedback, and it must be taught…

Data-Based Decision-Making: In tracking task success students can learn to look at direct results (improved test scores), indirect results (less time required for task completion), and affective indicators (improvements in mood, self-image, stress levels). Students need to be taught that all of these things matter, and that they will determine what assistive devices they use in the same way they determine their choice of mobile phone or mp3 player.
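That kind of tracking could be as simple as a log with three columns. As a hypothetical sketch (the class and field names here are my own invention, not a published instrument), it might look like this:

```python
# Illustrative sketch only: a minimal log for tracking whether a chosen
# tool is working, using the three kinds of indicators named above.
from dataclasses import dataclass, field

@dataclass
class ToolTrial:
    tool: str
    task: str
    direct: list = field(default_factory=list)     # e.g. test scores
    indirect: list = field(default_factory=list)   # e.g. task time, inverted so higher is better
    affective: list = field(default_factory=list)  # e.g. self-rated comfort, 1-5

    def improving(self, series):
        """Crude check: is the most recent observation better than the first?

        Assumes each series is recorded so that higher means better.
        """
        return len(series) >= 2 and series[-1] > series[0]
```

A student (or advisor) would record a few observations per tool and compare: if the direct results climb but the affective ratings fall, that mismatch is itself useful data for the next tool choice.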

The target is students prepared for independence and life after school, ready to make their own data-informed decisions throughout their lives as their needs and the world change. And to do this the roles of those of us in special needs services will change dramatically. We will become less like doctors and chemists/pharmacists, and more like librarians and advisors and personal trainers.

It will be a big change for all of us, but I firmly believe it is essential if the rights and needs of those with differing capabilities are to be respected and supported.

But to make this work it means students should have Google accounts - in other words - Gmail accounts. And schools are scared of Gmail, because it is personally controlled, not school controlled. School email accounts are typically terrible, badly run, expensive to operate systems. Gmail is free, works perfectly, includes huge storage capacity, instant searching, the best spam filters, and conversation linking. But, oh yeah, that control thing...

And now... Google Apps for Education lets schools have control, and their own domain address, all with Google features (see the story of Arizona State and Google). You can see all the options starting here. Now, your school can offer accessibility, allow creativity, and gain workplace dependability, all through the world's best email and on-line office suite.

22 March 2007

The session that described the Accessible Technology Initiative in the California State University system was rough to listen to - university administrators being so fond of acronyms that most of their speech is incomprehensible to anyone outside the system - but it was well worth listening to because every school, at every level, in every nation, will need to perform similar tasks very soon.

The ATI begins with building an understanding of both the law and of best practice. That's fairly radical on (at least) US campuses, where the legalities of Section 508 or the Americans with Disabilities Act (much less international accessibility expectations) too often seem like annoying infringements on academic freedom and creativity.

And then it sets out specific annual deadlines and reporting requirements for the 23 campuses in the system (ranging from the 800-student Maritime College to the 36,000-student Cal State - Fullerton, and including, of course, CSUN) in a series of areas:

1. Web Accessibility
2. Instructional Materials
3. Technology Procurement
4. Library Materials

with the goal of a Universal Design-based system fully in place and operating by 2012. Responsibility is placed on the Campus Presidents, and accountability to the plan seems built in.

The project has a couple of goals right away, including the mantra, "Repair to the Law, Design to Best Practice," and it is designed to create collaboration, not just among campuses to eliminate unneeded duplication of effort, but also between "Communities of Practice," tech staffs, faculty, and students, to best define "what works" and police progress.

Students will be frustrated by the "top down" timetable. After all, it is the campus workstations and course content inaccessibility that frustrate students every day. But it is a logical way to progress, especially when the need to train faculty, university staff, and university technology departments is so massive (I can testify to the number of "Special Education" faculty members who hand out inaccessible PDFs for course readings or who pick out textbooks far too late for students to get alternate formats in time), and accessible workstations need accessible servers, and accessible course materials do need to exist on accessible course software.

CSU, as one of the largest university systems in the world (over 410,000 students), is doing good work here, for themselves and for the world - since publishers and digital journal suppliers must respond to a system like this. They are producing excellent ideas and resources along the way, and their experience will provide a fabulous case study in creating the universally designed university.

I was tired and grumpy by the time I reached this booth, and I challenged the vendor rep. "What makes this 'truly universal access'?" I asked, pointing to the big sign behind him. He said, "come look," and I watched Madentec's IntelliSwitch go to work with their DiscoverPro 2.0 software (PC) and their DiscoverEnvoy v1.0 software (Mac OS Tiger). It was impressive. The "two-switch" IntelliSwitch has always had great capabilities, but combined with these software systems it becomes a great solution for those with high cognitive function but low dexterity. On both platforms keyboard and mouse controls work smoothly, and navigation is intuitive. On the Mac, with Envoy, it goes beyond that, running cleanly and beautifully, controlling every computer function from start-up to shut-down and in between.

DiscoverEnvoy even runs in Parallels, the Windows-on-an-Intel/Mac system, though this system is still "buggy."

There are many alternative ways to control computers, from voice, to eye-gaze, to brainwaves, to the headmouse (some of which I will mention in days to come). The switch is an old system, but with these new (or in the case of DiscoverPro, updated) software packages, it stays as a very viable option to consider.

21 March 2007

FoxyVoice was the brilliant Firefox extension that made text-to-speech simply instantaneous for everyone, including young and LD kids. You just highlighted text, clicked on a smiley face in the lower right corner of the screen, and whatever you had highlighted was read to you.

But, alas, no one updated FoxyVoice after Firefox 1.0.7, so you could either keep an old browser (if still a great one) or update to Firefox 2.x and depend on the vastly more sophisticated, but vastly more complicated to use, FireVox.

"You bitched about this last year," Charles L. Chen of CLC and Mozilla told me, "well, here's your solution." He slapped a CD into my hand and pulled up a website on his computer. There, in the upper left corner, were three strange little icons - one white, one green, one red. Put the cursor into any word, sentence, or paragraph, and click. If you click the white icon it will read that word (or a section you have highlighted), perfect for any student who needs a single word sounded out for themselves. Click the green icon and the page will be read from the paragraph you have clicked into. It reads and scrolls the page. Click red (of course) to stop.

Let me quote a very recent review from Accessify: "Unlike Fire Vox which is designed for visually impaired users, CLiCk, Speak is designed for sighted users who want text-to-speech functionality. It doesn't identify elements or announce events - two features that are very important for visually impaired users but very annoying for sighted users. If you're a sighted user who wants to have web pages read to you because you have cognitive issues (for example, dyslexia), because you have literacy issues (like me - I can understand spoken Mandarin Chinese just fine, but reading is difficult for me), because you want to reduce eyestrain and listen to a web page being read, etc., then you are likely to prefer CLiCk, Speak over Fire Vox."