learning communications blog

eLearning is a misunderstood domain. Basically there are two types of companies in the space: those that supply technology and those that produce content. The latter often sell more than finished content - by producing webcasts, podcasts and screencasts, they facilitate communications.

Sunday, March 21, 2010

The future of TV was discussed at the South by Southwest conference last week, and this has been reviewed by top geeks on the Gillmor Gang show. The emphasis of the discussion was how social networks integrate with the television experience, and the iPad as a TV platform. The geeks predict that the next SXSW event will be focused on the interplay of television and social networks. It's a feature-length video podcast, but worth checking out.

Steve Gillmor has followed Leo Laporte in moving tech podcasts forward to embrace video. If you watch this show, there are quite a few 'over-the-shoulder' shots of the technical setup, which is based on the NewTek TriCaster. Very impressive use of low-cost video production technology.

Sunday, January 03, 2010

Just read a great list - it's about things to do to start the 2010 New Year off right. I'm not usually a fan of lists, but this one is from photographer Chase Jarvis, and I can relate to it as a videographer myself.

The idea that prompted this post: "Do the thing on your list that you most dread doing. Call that client who hasn't paid. Sign up for Twitter. Develop a marketing plan." Well, I'm already on Twitter - maybe that's why I haven't posted here in months. But I especially like this one: "Remind yourself that the gear you can't afford is not the barrier keeping you from success."

Monday, July 27, 2009

A European study on "How People are using Twitter during Conferences" has demonstrated that participants use Twitter to enhance realtime learning. Although the sample size was small, and covered only 5 conferences, the researchers found that the majority of conference attendees already had a Twitter account (95.1%) and many of those who did actively used it to tweet during the conference (67.5%).

The most interesting insight was that nearly half the tweets were simple plain text messages while tweets with links to web sites only accounted for 10% of the messages. In other words, the Twitterers were using the medium to share the information they were learning at the present moment as opposed to posting links to information already available on the web.

Saturday, July 18, 2009

Walter Cronkite narrated for me and my generation the Kennedy assassination, the Civil Rights marches, the Vietnam War and the Watergate Hearings. But I especially remember his inspired commentary throughout the many 'space shots' of the Mercury, Gemini and Apollo programmes. He narrated the historic moon landing coverage on 20 July 1969 (nearly 40 years ago today), which was followed by audiences around the world.

Walter Cronkite has passed away at 92 years of age. He has been called the 'most trusted voice in America' and was remembered today by the folks at NASA Mission Control, as follows:

"It is with great sadness that the NASA family learned of Walter Cronkite's passing. He led the transition from print and radio reporting to the juggernaut that became television journalism. His insight and integrity were unparalleled, and his compassion helped America make it through some of the most tragic and trying times of the 20th century."

"From the earliest days of the space program, Walter brought the excitement, the drama and the achievements of space flight directly into our homes. But it was the conquest of the moon in the late 1960s that energized Walter most about exploration. He called it the most important feat of all time and said that the success of Apollo 11 would be remembered 500 years from now as humanity's greatest achievement."

You can hear and see Walter describe the moon landing in this short clip from the series Walter Cronkite Remembers the 20th Century. It gives me goosebumps even now.

Monday, June 08, 2009

I've often complained about the poor communications support in Facebook. All your friends are there, but you can't really have a conversation, except by sending messages via email or using a teeny tiny chat window.

Enter Google Wave (http://wave.google.com), a next-generation web communications tool that is a mashup of email, instant messaging, forums and wikis. You probably won't get it until you see the demo - but once you see it, you'll understand how this platform is going to transform communications. It's a long demo, but after about 10 minutes you'll be convinced.

I first learned about Wave from the ACOR Cancer support group (http://listserv.acor.org) which is participating in a trial of the software. I happen to be a list moderator there. Anyway, ACOR uses an ancient email exchange protocol known as LISTSERV, so it's easy to grasp how their members could benefit from Wave. But imagine if this platform was released into an academic setting!

I feel very confident that email will be relegated to backward-compatible communications, much as fax was relegated once email came along. And the editors at TechCrunch agree with me. The future is Wave.

Friday, March 13, 2009

Elliott Masie's Learning TRENDS (focus is corporate training) - http://www.masie.com
Elliott Masie is an internationally recognized futurist, analyst, researcher and organizer on the critical topics of workforce learning, business collaboration and emerging technologies. He is the editor of Learning TRENDS by Elliott Masie, an Internet newsletter read by over 52,000 business executives worldwide, and a regular columnist in professional publications. He is the author of a dozen books, and is the convener of Learning 2009.

EdTech Talk (focus is educational technology) - http://www.edtechtalk.com
EdTechTalk is a community of educators interested in discussing and learning about the uses of educational technology. The team webcasts several live shows each week.

Opencast Project (focus is video in education) - http://www.opencastproject.org
The Opencast community is a collaboration of higher education institutions working together to explore, define, and document podcasting best practices and technologies.

Future of Education (focus is academic teaching methods) - http://www.futureofeducation.com
This community is devoted to providing an opportunity for those who care about education to share their voices and ideas with others.

There is a lot of sharing happening in these communities about what technologies do and don't work in education, whether corporate or academic, and it's free to you and me.

Saturday, November 29, 2008

It is very sad for an Indiaphile like me to see the Taj Hotel burning - if you've been there, I'm sure you feel the same. I mourn the loss of innocent life, and seeing westerners gunned down in places that I have myself visited. And I am appalled at seeing urban warfare, with citizens running in terror, live on the Internet.

For the past two days, Internet social media has allowed me to follow the Mumbai Siege in a breathtaking torrent of information. News reports came in via Twitter with much more immediacy than the mainstream media. The urgency and passion of reports from the front lines was evident - though sometimes the 'facts' were partially or even entirely wrong.

But images rarely lie, and I had direct access to live video feeds from Mumbai via NDTV and IBNLive. I could only fault them when they replayed scenes under the banner of 'live news', which they later claimed meant that the audio was live and the video was blocked to prevent compromising anti-terrorist actions in progress.

India has to get much more serious about stopping terrorism - and Pakistan too, if anyone can take them seriously. This siege has been a wakeup call for both the Indian and Pakistani governments. In the meantime, we should all support Mumbai in whatever way possible.

Thursday, November 13, 2008

One year ago, I was scouting for ideas to develop a new conference format to replace the iX Conference event I had been helping to organise for several years. iX is a thought-leadership conference focused on Web 2.0 trends and technologies, modeled on the successful O'Reilly conferences. The annual iX Conference is organised by Singapore infocomm Technology Federation (SiTF), and up to 2007 my role had been as chair or co-chair of the organising committee.

For 2008, I was interested in developing a digital media event that would combine a film festival screening and an IT thought-leadership conference, to explore business opportunities in the converged technology & content space. I travelled to Vancouver to see the Vidfest event, and I was impressed by how they combined games, mobile and a showcase of made-for-Internet films.

So was born the Singapore Digital Media Festival, organised by the Digital Media Chapter of SiTF. The inaugural DMfest was held about 2 weeks ago. Did it live up to expectations, and does the format work? Well, we made a bit of money for SiTF, although attendance was not as high as I had anticipated. We planned for up to 500, but turnout was quite a bit below that. It was interesting that the festival screening drew a different audience than the conference - perhaps only half those attending the conference had attended the film and video programme the evening before. But overall the programme was very well received, and we had great feedback from sponsors, speakers, media, bloggers and ordinary attendees.

The conference sessions were all recorded and have been published as webcasts, so you can judge for yourself. The theme of the event was 'Television 2.0 - Internet Services and New Media Mashup', and we had filmmakers as well as technologists on hand to address the creative, distribution and community issues.

The evening before the conference we screened about 3 hours of made-for-Internet (MFI) film and video programmes in HD format. You can get some flavour of this screening by scanning the online previews. The highlight of the screening (and the entire festival) was no doubt the live linkup with Mark Schubin, Chief Engineer of the Metropolitan Opera Live in HD. The Met has been pioneering delivery of operas live in HD to cinemas throughout the world, and Mark was there (virtually) to tell us about the making of these shows.

Speaking of 'making of' stories... the live linkup was a story in itself. We ran the connection from the Manhattan School of Music to the National Museum Theatre over an IPv6 connection on Internet2, the next-generation research and education network, which allowed us to transmit in HD quality (at about 4 megabits/second) for projection of the interview on the theatre screen. Many parties helped out, including Singaren, Mediacorp, NETe2 Asia and Singtel. The 'last mile' was a fibre optic cable laid from the telecom riser, snaking several hundred metres along the museum's back corridors into the projection booth. As a backup, we shot a second interview with Mark in New York's Central Park a few days ahead and couriered the tape to Singapore. It arrived just an hour before the show began.

We faced a few key challenges staging such an ambitious event. Foremost among these was the short lead time, and the need to develop our DMfest brand to engage potential attendees and sponsors. I say short lead time, because it wasn't until early September (just 60 days before the event) that we had support commitments from government agencies and sponsors. We began the branding exercise with a good domain name and a wonderful logo (see above) developed by Nicholas Ang, a second-year student from Ngee Ann Polytechnic's School of Infocomm Technology.

Next we developed a wiki and encouraged contributions from a list of 30 'organisers'. We also developed a formal organising committee that met every two weeks. We appointed IDC to perform marketing and event management, and Text100 to develop a publicity campaign using social media. We contracted Veron Ang (of Sparklette fame) to develop the website and Tan Ee Sze to do the writing. We formed an OPS team that met weekly in the final lead-up to the event.

The programming was done mostly by me, and I learned a lot about the relationship of animation, machinima and virtual sets in the Internet film creation process. Speaker selection was pretty straightforward, but curating the festival screening was a new challenge for me. For inspiration and guidance I must thank San Francisco artist Justin Hoover and Singapore Film Festival director Philip Cheah.

We wanted the festival to incorporate mobile and games platforms, but we faced a challenge in figuring out how to showcase made-for-mobile (MFM) content. This was addressed by Billy Fong of VHQ Post, who organised the mobile content showcase. He obtained loans of handsets from Motorola and Nokia, tethered them to tables, hired cute sales promoters to demo the content and produced movie-posters to give visitors an instant impression of what was on offer. Similarly, Aroon Tan, MD and co-founder of Magma Studios, ably organised a games showcase that was situated in a living room environment. Like the rest of the organising committee, Billy and Aroon were volunteers.

Topping it all off, we organised a mini-exhibition for the benefit of sponsors. Each got a small tabletop to show off their digital media solutions and services, positioned in the main foyer to guarantee maximum traffic. We also obtained a donation of thumbdrives from HP Storageworks and produced a content folio with a variety of informational and marketing materials organised as a wiki (using the free tool TiddlyWiki). These were offered to every delegate.

In the conference itself, we gave the best seats to a set of invited bloggers, who were pre-selected and briefed by Text100. Communications were facilitated by Ram Srinivasan, a volunteer who operated a Campfire chat and a Twitter feed, which were projected on plasma display screens located throughout the ballroom. Ram researched during the speaker presentations, and posted the results of his digging (eg- wikipedia articles on micropayment, or stories about the making of a particular Internet film). Thus, the audience was engaged in several realtime channels, and could SMS their feedback or comment via the Campfire chat. This worked great and would have been even more compelling if the audience had been bigger.

The keynote sessions were all excellent. Iolo Jones took the opportunity to launch his new product VidZapper at the conference. Hugh Hancock championed the 'guerilla showrunner' (ie- making MFI films on a low budget) and Timo Vuorensola made an excellent case for collaboration on MFI films using the model of open source software development.

As the programmer, I was a bit concerned that the panel sessions covered too much ground. The distribution panel addressed News 2.0, micro-payments, mobile content and video production tools - quite a lot to cover in less than an hour. But there was something for everyone and the programme flow was easy enough to follow. The audience actually increased throughout the day, rather than tapering off as at most events. Everyone said they were having a good time, including Mediacorp celebrity newscaster Genevieve Woo and MDA Deputy CEO Michael Yap, and most attendees stayed for the chill-out reception.

I think our organising committee Chair Ivan Ho did a great job making this whole event come together, not least by focusing on the financial bottom line. He and our Digital Media Chapter Chairman Ng Chong Khim were principally responsible for securing all the sponsors, and thus making my job much easier.

If we do this again next year, as has been the intention all along, we will have more lead time and a recognised brand. I think DMfest 2008 has established a great foundation for an annual event to bring together the IT vendor and media production communities in Singapore.

Monday, November 10, 2008

After much frustration these past years trying to remember which account goes with which login identity, I recently made a determined effort to consolidate my online identity. Now, every account has the same identity 'wmclaxton', rather than some hyphenated or reverse order version of my name. Consider the elegance of this:

Not easy for everyone, I know. How do you find a unique name suitable for all the services? Lucky I guess - I managed to find a rather unique way to signify my name that no one else seems to be using (for now). W and M are my first and middle initials, and together they are a short form for William.

Did I lose anything? Not really. It was a challenge to migrate some of the identities (notably Flickr), and Facebook still requires an account ID as part of the URL. I had to export my Skype contacts and lost the groups when re-importing them. I orphaned some photo sets on Flickr, and had to re-invite my Flickr contacts. Google, Skype, Twitter and LinkedIn were a breeze.

Plaxo and Google have tried to offer consolidated identity management, but I've never trusted Plaxo, and hey - this approach works for me. My social network is now just a bit easier to manage.

Saturday, August 02, 2008

Bertrand Serlet, Senior Vice President of Software Engineering at Apple, has filed a patent application on a next-generation podcasting solution. It has some intriguing features such as automated switching between the video and display graphics, perhaps using a pointer device to detect when attention should be focused on the display graphics.

But I don't think Serlet's idea represents a significant innovation in podcasting technology - it is generally about presentation recording and specifically intelligent switching technology. And as for presentation recording, Serlet's idea is not nearly as impactful as Panopto's innovations, which are available today.

The factor that makes Panopto so different from what Apple is proposing is the multi-stream synchronisation. Apple's idea focuses on the intelligent switching problem from the perspective of producing one final output video stream. This has its limits. If you look at say TED Talks, you can see that no matter how good the switching between video and display graphics, there is still a sense that the recorded presentation is playing catchup with what the audience is able to see.

On the other hand, Panopto's approach inherently scales to synchronisation of many data inputs, which might include chats, instant messaging, Twitter feeds, pointer coordinates, assessment data, audience response data, and of course audio, video and display graphics. In fact, one piece of feedback we hear from clients is delight that they can use Panopto to record from two screens at once (eg- PowerPoints and a Bloomberg terminal). The final output is not a merged and flattened video file, but a set of separate standards-based data streams which can be rendered in a variety of ways using player skins. This is far better than a flattened video file, and is more flexible to device and bandwidth constraints.
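To make the multi-stream idea concrete, here is a minimal sketch (illustrative only - not Panopto's actual format) of a recording held as independent timestamped streams, which a player skin can query at any playback time instead of decoding one flattened video:

```python
# Illustrative sketch: a recording as a set of independent timestamped
# streams. Stream names and events here are made up for the example.
from bisect import bisect_right

recording = {
    "slides":  [(0.0, "slide1.png"), (95.0, "slide2.png")],
    "chat":    [(12.4, "alice: can you zoom in?"), (80.1, "bob: thanks!")],
    "pointer": [(30.0, (0.42, 0.61))],
}

def state_at(stream, t):
    """Return the latest event in a stream at or before time t (None if none yet)."""
    times = [ts for ts, _ in stream]
    i = bisect_right(times, t)  # number of events that have occurred by time t
    return stream[i - 1][1] if i else None

# A player skin scrubbing to t=100s would show slide2 plus the full chat so far.
print(state_at(recording["slides"], 100.0))
```

Because each stream stays separate, a low-bandwidth player could fetch only the slides and audio, while a desktop skin renders everything side by side.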

Apple's useful innovation in this patent, if they can do it, is to provide a set of pointer data that can be used for switching. My friend Peter Du and I were noticing that presenters are so comfortable using laser pointers, but that this data is lost during a recorded presentation. He suggested we design a tool to sense the location of the pointer beam relative to the screen, and use it to drive the cursor in realtime. That would be a way to capture the gestures of speakers that prefer laser pointers.

Some work has been done in this area by Johnny Lee. Lee is a really interesting guy with great educational technology projects, including his Wiimote hacks (using the Wii remote's infrared camera for low-cost pointer tracking). Check out his Wii projects page. One possible application would be a classroom tracking camera which follows the presenter.

Coming back to the switching problem highlighted by Apple's patent, I think the real switching problem is between cameras - TED Talks have at least 2 cameras, usually 3. Say one camera is wide and another is tight: how do you control the PTZ and switch between them automatically? Audio sensing has too much lag, and motion detection is prone to extraneous inputs (eg- a member of the audience walks in front of the presenter and the camera follows him/her).

The best approach I've seen is similar to what Lee proposes - using an IR badge or reflective tape worn by the presenter, which the wide camera locates within the 'stage area'. It then relays the location data to a second camera so that it can PTZ for a tight shot on that target. This is a smart and reliable way to follow the presenter. You can switch cameras as suggested by Apple's patent application, eg- if the presenter is using the keyboard or mouse, has his back to the camera or is using the pointer, switch to the wide shot. Overlay that with a voice-sensing or push-to-talk audio subsystem so that a third or fourth camera can zoom in when someone asks a question, and you have a fully automated presentation recording system.
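The switching rules I just described boil down to a simple priority function. The sensor flags and camera names below are hypothetical - just a sketch of the shape of the logic, not any shipping product:

```python
# Hypothetical sketch of rule-based camera switching for a presentation
# recording system. All sensor names and camera labels are illustrative.

def choose_shot(presenter_state: dict, audience_mic_active: bool) -> str:
    """Pick which camera feed to take, given simple sensor flags."""
    # A question from the floor overrides everything: cut to the audience camera.
    if audience_mic_active:
        return "audience"
    # Keyboard, mouse, back-to-camera or pointer use all mean the display
    # graphics matter more than the presenter's face: take the wide shot.
    if (presenter_state.get("using_keyboard")
            or presenter_state.get("using_mouse")
            or presenter_state.get("back_to_camera")
            or presenter_state.get("using_pointer")):
        return "wide"
    # Otherwise the IR-badge tracking camera holds a tight shot on the speaker.
    return "tight"

print(choose_shot({"using_pointer": True}, audience_mic_active=False))
```

In a real system the flags would be fed by the IR-badge tracker and a push-to-talk audio subsystem, with some hysteresis so the output doesn't flicker between shots.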

Finally, Apple's patent application leaves me wondering if there is something inherent in podcasting technology that limits delivery to a single stream. I'm sure you can put an XML descriptor of a synchronised presentation into an RSS 2.0 enclosure, but I doubt today's players would know how to fetch it for local storage. Presumably an iPod, eBook reader or phone wouldn't know what to do with it either.
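For what it's worth, nothing in RSS 2.0 itself forbids this - an enclosure is just a URL with a length and a MIME type. Here's a quick sketch of such a feed item; the manifest URL and content type are my own assumptions, not from the patent:

```python
# Sketch: an RSS 2.0 <item> whose enclosure points at a hypothetical XML
# manifest describing synchronised streams, rather than a single video file.
import xml.etree.ElementTree as ET

item = ET.Element("item")
ET.SubElement(item, "title").text = "Lecture 12 (synchronised presentation)"
# RSS 2.0 requires url, length and type on <enclosure>; the values here
# are illustrative placeholders.
ET.SubElement(item, "enclosure", {
    "url": "http://example.com/lecture12/manifest.xml",
    "length": "2048",
    "type": "application/xml",
})
print(ET.tostring(item, encoding="unicode"))
```

The aggregator would see a perfectly valid enclosure; the open question is whether any player, having downloaded the manifest, would then fetch and synchronise the streams it describes.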