Community

Changes to SPEAK – now with “Focused Speaking”

We are constantly trying to improve the user experience on EnglishCentral. It is a challenge to balance the need to motivate learners and keep them engaged with the need for sound pedagogic features, so learners can learn English effectively and purposefully.

We’ve done that with a new feature we are calling Focused Speaking. But it is anything but “light”. Now, when learners finish LEARN and studying the words in a video, they can go on and SPEAK only those lines.

This means that if a learner wants to speak all the lines of the video, they will have to go back and select those lines. However, we are going to make it simple: later this week, we’ll have a “Focused Speaking Mode” setting in the settings area of the player. Any learner who wishes to speak the whole video and skip “Focused Speaking” can do so by unchecking it.

We think Focused Speaking is fantastic. You can do more videos and really focus your speaking on only the lines with the selected, purposeful vocabulary you are studying. It’s not “light” in any diminished sense: you can study more videos, and be more engaged and motivated by the variety of our huge video catalog.

So enjoy Focused Speaking, and we welcome your feedback. We’ll be making an announcement to all members once we launch the “Speak The Whole Video” option in the settings.


Comments

A quick question – have you changed the way the scores are calculated in the speak mode? Since the introduction of Speak Lite, I have quite often found that an entire sentence I had just spoken (and recorded) glows green without any markups (which suggests the sentence was pronounced perfectly), yet somehow I still do not get the full score. As there are no markups to indicate where I made mistakes, I am left without any clue as to how to improve my pronunciation further to reach the full score. Can you clarify this? Or am I just missing something?

This does seem like a bug, and I will send this on to our speech team so they can look into it.

I’ll also note that yes, we have changed somewhat how we assess learner speech, but this shouldn’t be what is causing the problem you describe. We are now comparing speech not just at the phoneme level but at the triphone level – the thousands of sounds possible when we blend speech. This gives a much more accurate assessment. Further, we compare not just against a native-speaker model but against top speakers from the same mother language. This is more accurate in that these days the world judges a speaker’s fluency not just against a standard British or American model, but mostly against very fluent second-language speakers (for example, Koreans might compare against someone like Ban Ki-moon).
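To illustrate what “comparing at the triphone level” means, here is a minimal sketch of expanding a phoneme sequence into context-dependent triphones and comparing it to a reference. EnglishCentral’s actual engine is not public, so the function names, the ARPAbet-style phoneme strings, and the exact-match scoring are illustrative assumptions only – real engines score acoustic likelihoods, not symbol matches.

```python
def triphones(phonemes):
    """Expand a phoneme sequence into triphones (left-center+right),
    padding the edges with a silence marker 'sil'."""
    padded = ["sil"] + list(phonemes) + ["sil"]
    return [
        f"{padded[i - 1]}-{padded[i]}+{padded[i + 1]}"
        for i in range(1, len(padded) - 1)
    ]

def triphone_match_score(spoken, reference):
    """Fraction of reference triphones matched position-by-position.
    A toy stand-in for an acoustic-model comparison."""
    ref = triphones(reference)
    spk = triphones(spoken)
    matches = sum(a == b for a, b in zip(spk, ref))
    return matches / len(ref)

# Reference "and" as /ae n d/ vs a learner saying /ah n d/:
print(triphones(["ae", "n", "d"]))
# → ['sil-ae+n', 'ae-n+d', 'n-d+sil']
print(triphone_match_score(["ah", "n", "d"], ["ae", "n", "d"]))
# → 0.333... (one wrong vowel corrupts two of the three triphones)
```

This shows why triphone analysis is “deeper” than per-phoneme comparison: a single mispronounced phoneme degrades every triphone whose context includes it, so the same surface mistake can move the score more than a phoneme-by-phoneme count would suggest.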

Thank you for your reply. I’ve been observing more carefully how the player performs over the last few days, and I also noticed that quite often you get different scores for the same sentence even when you make exactly the same mistake(s).

For example, I pronounce and record a single sentence, say, ten times – and in three out of the ten, I pronounce the sentence perfectly except for the word “and.” Not just any “and” in the sentence, but exactly the same “and” at the same position within the sentence. Each time I checked to see what part of “and” I had pronounced wrongly, and it turned out I did not pronounce the “a” in “and” correctly. So basically I made exactly the same mistake every time – at least according to the feedback from the player. What annoys me, however, is that the player gave me totally different scores on each of these three attempts: first 82, then 89, then 86. I was like, “what’s going on??”

This problem probably comes from the same cause as the one I described in my earlier comment. It doesn’t matter so much what score the player gives. However, the fact that the player gives out different scores when you pronounce a sentence exactly the same way is simply so frustrating that I imagine somebody might start questioning the reliability of the entire scoring system.

Having said that, I like the new player in general and appreciate the fact that your team is trying to improve it constantly. Keep up the good work!!

This is good feedback and our speech team really appreciates it. Refining our feedback and speech engine is a continual process – both in its accuracy and in how well we communicate results to the student. We are really making great improvements through two things. 1. The massive amount of second-language speaker data we have accumulated over two years – over 100 million lines (we’ll be making an announcement soon). This second-language speaker data will allow us to refine our engine, much as great companies like Nuance perfected Dragon NaturallySpeaking through data from first-language speakers. We’ve recently aligned our speech engine more towards the L1 of the speaker, so feedback is given based on the speaker’s L1, comparing their speech to fluent speakers of the same L1 group. This could be something affecting your feedback.

2. Speech analysis at the triphone level. We no longer just compare each phoneme the speaker utters; we actually analyze at the triphone level – allophones blended together. This is still being refined but will give more precision. So this “deeper” analysis could also be affecting your feedback.

There are other things that could factor into what you experienced. I will send this along to our speech scientists for their comments and investigation. Yep, the job is far from over, but we are making progress!