Posted
by
Soulskill
on Monday January 23, 2012 @04:30PM
from the distributing-bits-of-knowledge dept.

mikejuk writes "Professor Sebastian Thrun has given up his Stanford position to start Udacity — an online educational venture. Udacity's first two free courses are Building a Search Engine and Programming a Robotic Car. In a moving speech at the Digital Life Design conference, he explained that after presenting the online AI course to thousands of students he could no longer teach at Stanford: 'Now that I saw the true power of education, there is no turning back. It's like a drug. I won't be able to teach 200 students again, in a conventional classroom setting.' Let's hope Udacity works out; Stanford is a tough act to follow."

When lectures can be saved to a video format on the Internet, why pay the teacher to deliver the same lecture every year?

If a video of a lecture is as useful as the live lecture, it's a bad lecture.

When books can be copied for free, why pay $200 for a physical version of the book?

If all of the distributed copies are free, I'm thinking the major problem is going to be finding people to write and edit them. Don't get me wrong, there are some older math texts you could probably use for ages, but that will only get you so far.

When lectures can be saved to a video format on the Internet, why pay the teacher to deliver the same lecture every year?

If a video of a lecture is as useful as the live lecture, it's a bad lecture.

If seeing the lecture online is only as good as seeing it live, then it is a bad web site. Online, you can provide additional content, link to the exact point in the video where a question is answered, and break the video into topics so that students can spend more time on those most relevant to them. You can also offer more interactive tools and the like.

If a video of a lecture is as useful as the live lecture, it's a bad lecture.

I'd be careful with that statement. If you claim there must be some interaction, then let's get real: you don't want to be interrupted by questions every 15 seconds. So live questioning as feedback from students to the lecturer is out. The most interaction you'll get, then, is the lecturer reading the faces and body language of students.

But what does that tell the lecturer? Nothing that's very applicable when the medium is video! In a video lecture, if you feel like falling asleep, you pause it, get up, walk around, come back refreshed, and resume watching a few minutes back into the recording to get back on topic. If you need to look something up, you can pause, google it, check a book or a previous lecture, then resume when you're ready. Those two situations cover most of the real-time feedback a lecturer would use, I'd presume. So, failing specific examples of how the reverse channel helps in a prerecorded lecture, I call your claim a gross exaggeration at best. Audience feedback is important in a live lecture setting, but recorded lectures are really quite different because the student controls the playback. Good luck pausing the professor when you feel like dozing off for 45 minutes in the auditorium. :)

If they do as mentioned above and use the class as a way to drum up interest from companies that want to recruit the best students, then it can pay for itself via finder's fees. That would be a great way to subsidize education.

I find it interesting, though, that Sebastian Thrun gets so much attention while Andrew Ng, for example, gets no mention. I think Ng poured a tremendous amount of effort into teaching an absolutely outstanding class with far more structured and well-developed content.

Don't get me wrong, Thrun is an enthusiastic and obviously knowledgeable individual, but having followed both the AI and ML classes, I was of the opinion that Andrew Ng was the better teacher. Thrun needs to improve his teaching skills so that he can better impart his great store of knowledge to students. Although that is my personal opinion, I think you might find it backed by some evidence if you were to trawl through the comments on the respective forums of the AI and ML classes. Overall, both of them, plus Peter Norvig and the rest of their teams, made fantastic contributions, and that should be recognized equally whenever possible!

Oh yes. I took all three classes (AI, DB and ML), and AI was plagued with ambiguity, handwaving and mistakes caused by sloppiness. Some homework questions had so many errors that they were unsolvable and had to be amended several times shortly before the deadline, of course without telling the students who had already submitted their answers. Check out the old discussions on aiqus.com; it was horrible.

The AI class was also the one with no practical exercises at all, except for a tacked-on codebreaking exercise after the final exam that was neither graded nor properly discussed afterwards. The AI class server software offered neither video streaming (they relied on YouTube instead) nor the promised forum. In contrast, both Andrew Ng's Machine Learning class and Jennifer Widom's Database class were hands-on and thoroughly prepared. I learned a lot more there.

I think the problem with Thrun and Norvig was their attitude, probably not unrelated to being employed by both Stanford and Google. They expected to be venerated as superstars, and they seemed to think they could pull this off without much effort.