Please keep in mind that these minutes are mostly a rough transcript of what was said at the meeting, rather than a source of authoritative information. Consider referring to the presentation slides, blog posts, press releases and other official material.

Runa:
welcome
last review was after Wikimania
this one includes our last release from Jan 15, so some overlap with Q3

[slide 3]

7 people across the globe, currently nobody in SF
since Alolita left, I have been taking care of administrative stuff, Amir took interim responsibility for Product

[slide 4]

Oct-Dec 2014: Summary
this is all regarding Content Translation (CX)
2 of these 10 languages can only be used as source
Kartik worked with outside partners (Apertium, Debian)
began with Spanish/Catalan as first pair
developed "graduation plan" to evaluate support for languages
led to a list of 15 language pairs, 5 new languages supported, 8 as beta
3rd goal refers to master design document maintained by Pau
right now only minimum viable solution for editing links

[slide 5]

"non-collaborative": only for the logged-in user
respond to requests within 7-10 days
CLDR update: particularly important for Russian[?] and some other languages

[slide 6]

Metrics
(did not have baseline for other language pairs)
Lila: what percentage of all articles published during that time?
Amir: about 60 new articles per day overall, so it's not a large ratio
Erik: how we arrived at these metrics:
wanted limited release with sufficiently many users having tested it, and continued use
this is not meant to be enough for long-term investment
but sufficient to give input for development
still opt-in on Catalan Wikipedia, one has to activate it in Beta features, not easy to find
Lila: how many use it again?
Santosh: 65 users opted in
Amir: 19 users actually published articles
about 30% published more than one
Lila: that's the more relevant metric: user is satisfied enough to use the tool again
once that stickiness measure is satisfactory, then you can drive more people to the tool, that's not the problem
Erik: also looked at articles published during Beta labs stage
Amir: formerly just 2-3 power users, but now more
Erik: important that we don't have target drift [i.e. change/add goals mid-quarter], but hear what you're saying about stickiness
Lila: yes, but should have stickiness goal next quarter
Toby: how many articles are currently translated?
Amir: have an OPW student who is investigating that

Lila: these are great measures of success, but should report as percentage hit (e.g. "70% of bugs responded to within 24h" for "response within 24h" goal)
Runa: see earlier "Metrics" slide
Lila: I know you are focused on CX, but want team to think broadly about translations
if this tool is successful, want funnel into it
e.g. translate article summaries...
ideas about how to spur user engagement about translation tasks
Pau: e.g. if someone adds to the original article, could suggest translating that part, even on mobile
Toby: talked to Grantmaking how they could integrate CX into their programs
Lila: try a few things every quarter, bring up those that work
think of this as part of contribution funnel
Runa: OK
want to include other machine translation services
and options to improve the translation process even where machine translation is not available
Lila: pluggable architecture? yes

[slide 8]

Runa: want realistic plan for supporting non-CX areas
collect data on what high impact areas might be by user numbers
this might include some resourcing requests
Lila: sounds good - listen to the users ;)
Runa: remaining goals are about other features related to CX, can discuss them offline

[slide 9]

Damon: (will look at asks from my team's perspective)
[post-meeting notes on Asks:]
Inter-dependencies with other teams. Call-outs:
1. Product - Interim arrangement affects quick decision making in some cases.
2. Analytics - Marked in red to represent the nascent phase of collaboration between the two teams. This is rapidly moving forward and we would like to continue integrating data driven metrics with the guidance from Analytics team
3. For the remaining teams we have steady interactions which can be scaled up when required.