Category Archives: portfolio

Oops, I did it again. I’ve now managed to complete another MOOC, bringing my completion total to a grand 3 (the non-completion number is quite a bit higher, but more on that later). And I now have 6 badges from #oldsmooc and a certificate (or “statement of accomplishment”) from Coursera.

My #oldsmooc badges

Screenshot of Coursera record of achievement

But what do they actually mean? How, if ever, will/can I use these newly gained “achievements”?

Success and how it is measured continues to be one of the “known unknowns” for MOOCs. Debate (hype) about success is heightened by the now recognised and recorded high drop-out rates. If “only” 3,000 registered users complete a MOOC then it must be failing, mustn’t it? If you don’t get the certificate/badge/whatever then you have failed. Well, in one sense that might be true – if you take completion to equate with success. For a movement that is supposed to be revolutionising the (HE) system, the initial metrics some of the big xMOOCs are measuring and being measured by are pretty traditional. Some of the best-known successes of recent years have been college “drop-outs”, so why not embrace that difference and the flexibility that MOOCs offer learners?

Well, possibly because doing really new things and introducing new educational metrics is hard, and even harder to sell to venture capitalists, who don’t really understand what is “broken” with education. Even those who supposedly do understand education, e.g. governments, find any change to educational metrics (and in particular assessments) really hard to implement. In the UK we have recent examples of this with Michael Gove’s proposed changes to GCSEs, and in Scotland the introduction of the Curriculum for Excellence has been a pretty fraught affair over the last five years.

At the recent #unitemooc seminar at Newcastle, Suzanne Hardy told us how “empowered” she felt by not submitting a final digital artefact for assessment. I suspect she was not alone. Suzanne is confident enough in her own ability not to need a certificate to validate her experience of participating in the course. Again I suspect she is not alone. From my own experience I have found it incredibly liberating to be able to sign up for courses at no risk (cost) and then equally have no guilt about dropping out. It would mark a significant sea change if there was widespread recognition that not completing a course didn’t automatically equate with failure.

I’ve spoken to a number of people in recent weeks about their experiences of #oldsmooc and #edcmooc and many of them have, in their own words, “given up”. But as discussion has gone on it is apparent that they have all gained something from even cursory participation, either in terms of their own thinking about possible involvement in running a MOOC-like course, or in realising that although MOOCs are free, there is still the same time commitment required as with a paid course.

Of course I am very fortunate that I work and mix with a pretty well-educated bunch of people, who are in the main really interested in education, and have all the recognised achievements of a traditional education. They are also digitally literate, confident enough to navigate the massive online social element of MOOCs, and probably don’t need any more validation of their educational worth.

But what about everyone else? How do you start to make sense of the badges and certificates you may or may not collect? How can you control the way you show these to potential employers/universities as part of an application? Will they mean anything to those not familiar with MOOCs – which is actually the vast majority of the population? I know there are some developments in California in terms of trying to get some MOOCs accredited into the formal education system – but it’s very early stages.

Again based on my own experience, I was quite strategic in terms of #edcmooc: I wrote a reflective blog post for each week which I was then able to incorporate into my final artefact. But actually the blog posts were of much more value to me than the final submission, or indeed the certificate (though I do like the spacemen). I have seen an upward trend in my readership, and more importantly I have had lots of comments and pingbacks. I’ve been able to combine the experience with my own practice.

Again I’m very fortunate in being able to do this. In so many ways my blog is my portfolio. Which brings me, in a very convoluted way, to the point of this post. All this MOOC-ery has really started me thinking about e-portfolios. I don’t want to use the default Coursera profile page (partly because it shows the course I have taken and “not received a certificate” for), but more importantly it doesn’t allow me to incorporate other non-Coursera courses, or my newly acquired badges. I want to control how I present myself. This relates quite a lot to some of the thoughts I’ve had about using Cloudworks and my own educational data. Ultimately, I think what I’ve been alluding to there is also the development of a user-controlled e-portfolio.

So I’m off to think a bit more about that for the #lak13 MOOC. Then Lorna Campbell is going to start my MOOC de-programming schedule. I hope to be MOOC free by Christmas.

Following on from yesterday’s post, another “thought bomb” that has been running around my brain is something far closer to the core of Audrey’s “who owns your educational data?” presentation. Audrey was advocating the need for student-owned personal data lockers (see screenshot below). This idea also chimes with the work of the Tin Can API project and, closer to home in the UK, the MiData project. The latter is more concerned with generic data around utility and mobile phone usage than with educational data, but the data locker concept is key there too.
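For concreteness, the Tin Can API expresses learning activity as small JSON “statements” of the form actor–verb–object, which is exactly the kind of record a personal data locker would collect. Here is a minimal sketch of one such statement – the learner name, email address and activity URL are invented for illustration; only the verb id is a standard ADL one:

```python
import json

# A minimal Tin Can / xAPI-style statement: who did what to which activity.
# Names, mbox and activity id below are placeholders for illustration.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",  # placeholder address
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.org/courses/edcmooc",  # hypothetical activity id
        "definition": {"name": {"en-US": "E-learning and Digital Cultures"}},
    },
}

# A data locker (or Learning Record Store) would receive statements like
# this as JSON over HTTP, regardless of which course platform emitted them.
payload = json.dumps(statement)
```

The appeal for the learner is that statements from Coursera, the OU or anywhere else would all share this one shape, so aggregating them in a space you control becomes feasible.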

Screen shot of Personal Education Data Locker (Audrey Watters)

As you will know, dear reader, I have turned into something of a MOOC-aholic of late. I am becoming increasingly interested in how I can make sense of my data and network connections in and across the courses I’m participating in and, of course, how I can access and use the data I’m creating in and across these “open” courses.

I’m currently not a very active member of the current LAK13 learning analytics MOOC, but the first activity for the course is, I hope, going to help me frame some of the issues I’ve been thinking about in relation to my educational data and, in turn, my personal learning analytics.

Using the framework for the first assignment/task for LAK13, this is what I am going to try and do.

1. What do you want to do/understand better/solve?

I want to compare what data about my learning activity I can access across 3 different MOOCs and the online spaces I have interacted in on each, and see if I can identify any potentially meaningful patterns or networks which would help me reflect on, and better understand, my learning experiences. I also want to explore how/if learning analytics approaches could contribute to my personal learning environment (PLE) in relation to MOOCs, and whether it is possible to illustrate the different “success” measures from each course provider in a coherent way.

2. Defining the context: what is it that you want to solve or do? Who are the people that are involved? What are social implications? Cultural?

I want to see how/if I can aggregate my data from several MOOCs in a coherent open space and see what learning analytics approaches can be of help to a learner in terms of contextualising their educational experiences across a range of platforms.

This is mainly an experiment using myself and my data. I’m hoping that it might start to raise issues from the learner’s perspective which could have implications for course design, access to data, and thoughts around student-created and owned e-portfolios and/or data lockers.

3. Brainstorm ideas/challenges around your problem/opportunity. How could you solve it? What are the most important variables?

I’ve already done some initial brainstorming around using SNA techniques to visualise networks and connections in the Cloudworks site which the OLDS MOOC uses. Tony Hirst has (as ever) pointed the way to some further exploration, and I’ll be following up on Martin Hawksey’s recent post about discussion group data collection.
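To give a flavour of what that SNA brainstorming points at, here is a tiny, self-contained sketch of a degree-centrality calculation over a set of Cloudworks-style reply pairs. The names and pairs are entirely invented; a real analysis would start from scraped or exported discussion data:

```python
from collections import defaultdict

# Hypothetical (commenter, replied-to) pairs from a course discussion space.
replies = [
    ("sheila", "tony"), ("tony", "martin"), ("sheila", "martin"),
    ("lorna", "sheila"), ("martin", "sheila"),
]

# Build an undirected interaction graph as an adjacency-set dictionary.
graph = defaultdict(set)
for a, b in replies:
    graph[a].add(b)
    graph[b].add(a)

# Degree centrality: the fraction of other participants each person
# has interacted with. A value of 1.0 means connected to everyone.
n = len(graph)
centrality = {person: len(neigh) / (n - 1) for person, neigh in graph.items()}
most_central = max(centrality, key=centrality.get)
```

Even a toy measure like this starts to surface the “where am I in this network?” question – the challenge at MOOC scale is locating your own node in a graph of thousands.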

I’m not entirely sure about the most important variables just now, but one challenge I see is actually locating myself/my data in a potentially huge data set, and finding useful ways to contextualise me within it.

4. Explore potential data sources. Will you have problems accessing the data? What is the shape of the data (reasonably clean? or a mess of log files that span different systems and will require time and effort to clean/integrate?) Will the data be sufficient in scope to address the problem/opportunity that you are investigating?

The main issue I see just now is going to be collecting data, but I believe there is some data that I can access about each MOOC. The MOOCs I have in mind are primarily #edcmooc (Coursera) and #oldsmooc (OU). One seems to be far more open in terms of potential data access points than the other.

There will be some cleaning of data required but I’m hoping I can “stand on the shoulders of giants” and re-use some google spreadsheet goodness from Martin.

I’m fairly confident that there will be enough data for me to at least understand the challenges around letting learners try and make more sense of their data.

5. Consider the aspects of the problem/opportunity that are beyond the scope of analytics. How will your analytics model respond to these analytics blind spots?

This project is far wider than just analytics, as it will hopefully help me make more sense of the potential for analytics to help me, as a learner, make sense of and share my learning experiences in one place that I choose. Already I see Coursera, for example, trying to model my interactions on their courses into a space they have designed – and I don’t really like that.

I’m thinking much more about personal aggregation points/sources than the creation of an actual data locker. However, it may be that some existing e-portfolio systems could provide the basis for that.

The JISC e-Learning Programme team has just announced the release of five new publications on the themes of lifelong learning, e-portfolio implementation, innovation in further education, digital literacies, and extending the learning environment. These publications will be of interest to managers and practitioners in further and higher education and work based learning. Three of these publications are supported by additional online resources including videos, podcasts and full length case studies.

Effective Learning in a Digital Age: is an effective practice guide that explores ways in which institutions can respond flexibly to the needs of a broader range of learners and meet the opportunities and challenges presented by lifelong learning.

Enhancing practice: Exploring innovation with technology in further education is a short guide that explores how ten colleges in Scotland, Wales, Northern Ireland (SWaNI) and England are using technology to continue to deliver high-quality learning and achieve efficiency gains despite increasing pressure and reduced budgets.

Developing Digital Literacies: is a briefing paper that provides a snapshot of early outcomes of the JISC Developing Digital Literacies Programme and explores a range of emergent themes including graduate employability and the engagement of students in strategies for developing digital literacies.

Extending the learning environment: is a briefing paper that looks at how institutions can review and develop their existing virtual learning environments. It offers case study examples and explores how systems might be better used to support teaching and learning, improve administrative integration or manage tools, apps and widgets.

All guides are available in PDF, ePub, MOBI and text-only Word formats. Briefing papers are available in PDF.

There are a limited number of printed copies of each guide for colleges and universities to order online.

This week has been designated “activity week” for this year’s JISC Innovating E-learning conference. There are a number of pre-conference online activities taking place. I’m delighted to be chairing the “enhancing and creating student centred portfolios in VLEs” webinar this Friday (18th November at 11am).

The session will demonstrate a number of portfolio-centric integrations and widgets being developed as part of the current JISC DVLE Programme from the DOULS, DEVELOP and ceLTIc projects. Below is a short summary of each of the presentations.

DOULS
Portfolio redevelopment at the Open University has focussed on incorporating some of the enhanced functionality available within Google, e.g., a document repository, facilities for sharing, collaboration and reflection. The DOULS project (Distributed Open University Learning Systems) has been tasked with delivering integration between the Moodle learning environment and Google. The presentation by the Open University will focus on these integrations and what it means for the student experience.

DEVELOP
Part of the University of Reading’s DEVELOP Project has been to look at e-portfolio provision and use, and to develop three widgets to assist staff and students in the creation and maintenance of e-portfolios. The University uses Blackboard as its VLE and widgets have been designed using HTML/JavaScript to interact, in pilot studies, with Blackboard’s very basic e-portfolio tool. One widget, now fully developed for Blackboard, enables students to build a portfolio with all the pages as specified by their tutors/lecturers. This widget also guides the user through the various steps needed to share and maintain their portfolio. A feedback widget is at this moment being developed to allow tutors to provide feedback on specific parts of the students’ portfolios while an export widget is planned to allow students to download their portfolio in a standards-compliant form.

ceLTIc
Stephen Vickers from the ceLTIc project will demonstrate how the IMS Learning Tools Interoperability (LTI) specification can offer a simple and effective mechanism for integrating learning tools with a VLE. The session will also illustrate how LTI can enable learners from Moodle and Learn 9 to collaborate together in a shared space within an external tool such as WebPA or Elgg.
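For anyone curious what makes LTI a “simple and effective mechanism”: a basic LTI 1.x launch is just a form POST from the VLE to the tool, signed with OAuth 1.0a HMAC-SHA1 so the tool can trust who the learner is. A rough sketch of computing that signature – the URL, consumer key/secret and parameter values below are invented for illustration, though the parameter names follow the LTI 1.x spec – might look like this:

```python
import base64
import hashlib
import hmac
import urllib.parse

def percent_encode(value):
    # OAuth 1.0a percent-encoding: everything except unreserved characters.
    return urllib.parse.quote(str(value), safe="~")

def sign_lti_launch(url, params, consumer_secret):
    """Compute the oauth_signature for a basic LTI launch POST."""
    # 1. Percent-encode and sort all launch parameters.
    pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. Build the signature base string: METHOD & URL & params.
    base_string = "&".join(["POST", percent_encode(url), percent_encode(param_string)])
    # 3. HMAC-SHA1 keyed with "consumer_secret&" (no token secret in LTI).
    key = percent_encode(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Illustrative launch parameters -- values are made up for this sketch.
launch = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "webpa-activity-1",
    "user_id": "student-42",
    "roles": "Learner",
    "oauth_consumer_key": "moodle-demo",
    "oauth_nonce": "abc123",
    "oauth_timestamp": "1300000000",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_version": "1.0",
}
signature = sign_lti_launch("https://tool.example.org/launch", launch, "secret")
```

Because both Moodle and Learn 9 can emit this same signed POST, the external tool (WebPA, Elgg, etc.) only has to verify one signature scheme to accept learners from either VLE.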

Information on registration and how to access the webinar can be found at the conference website, and remember to follow @jiscel11 and the #jiscel11 hashtag on twitter for updates.

It’s not been so much springwatch time as codewatch time for JISC CETIS with our fourth codebash taking place on 7/8th June at the University of Bolton.

As in previous events the ‘bash’ focused mainly on content related activities concerning IMS Content Packaging and QTI. However there were a number of extended conversations surrounding various e-portfolio issues. The Portfolio SIG held a co-located meeting at the University on the second day of the codebash.

Thanks to our Dutch colleagues at SURF we were able to provide remote access to the event through the use of their Macromedia Breeze system. We had about 15 remote participants, including a large Scandinavian contingent organised through Tore Hoel from the Norwegian eStandards project. Tore also hosted a face-to-face meeting on day two of the bash.

Day one began with a series of presentations giving updates on IMS Content Packaging, QTI and SCORM. Although it may well seem that content packaging is ‘done and dusted’, there are still some issues that need to be resolved, particularly with the imminent release of v1.2 of the specification. Wilbert Kraan outlined the plans the IMS project working group has to develop two profiles for the new version of the spec (one a quite limited version of widely implemented features and one more general), to a mixed response. Some people felt there was a danger that providing such profiles could limit creativity and use of the newer features of the specification, and create a de facto limited implementation. It was agreed that care would have to be taken over the language used to describe the use of any such profiles.

Steve Lay then gave an update on IMS QTI and a useful potted history of the spec’s development stages and the functionality of each release of the specification. The IMS working group is currently looking at profiling issues and hopes to have a final release of the latest version of the spec available by early 2008. Angelo Panar from ADL provided the final presentation, giving an overview of developments in SCORM and the proposed LETSI initiative to move the governance of SCORM out of ADL and into the wider user community. Angelo also outlined some of the areas in which he envisaged SCORM would develop, such as extending sequencing and consistent user interface issues.

Although smaller than previous ‘bashes’, the general feeling was that this had been a useful event. There’s nothing quite like putting a group of developers in a room together and letting them ‘talk technical’. It’s probably fair to say that less bashing of packages took place than in previous events, but some useful testing, particularly in relation to QTI, did take place between remote and f2f participants. Maybe this was a sign of the success of previous events, in that many interoperability issues have been ironed out. It is also probably indicative of the current state of technology use in our community, where we are now increasingly moving towards web services and SOA approaches. It is likely that the next event we run will focus more on those areas – so if you have any suggestions for such an event, please let us know.

Copies of the presentations and audio recordings are available from the codebash web page. You may also be interested in Pete Johnson’s (Eduserve) take on the event too.