To create a user experience that's not only engaging but delightful, we will need to continually modernize MediaWiki, Wikimedia's technical infrastructure, and our development processes. This work will certainly not end in July 2013, but we anticipate that we'll be able to reach several important milestones.

Our vision for July 2013 is centered on increasing engagement and reducing barriers:

We will modernize the editor. We'll have a fast, scalable visual editor which can perform (single-user) visual editing of any document in the Wikimedia corpus.[1] It will be available to all users on demand. It will be ready to be the new default editing environment, or close to ready. This Visual Editor will be available on the majority of Wikimedia projects, and will ideally be the first user interface you see as a new user.

We will connect with the user. We will implement the foundations of a modern notifications system which will be at the center of a user's experience of Wikimedia as a living, breathing community. Whether you're receiving new messages, noticing updates to your watchlist, looking for things to do; whether you're logged out or logged in, on a desktop or on your phone, you'll be able to get updates about what's going on. This is a major architectural undertaking which will support future efforts to improve messaging, task management, project affiliation, mobile contribution methods, and more. It's a modernization project that will help bring Wikimedia's user experience to the level of other highly engaging sites and services. See Echo (Notifications).

We will increase site responsiveness. When we keep users waiting for a result of their actions, we are treating their time as having no value. This is unacceptable. Wait times reduce the productivity of experienced editors, and likely have a negative impact on the retention of new contributors as well. While we've always disabled functionality that has had a negative performance impact, more systematic profiling and key improvements to slow operations (like the parsing of complex pages) will help us achieve measurable impact.

We will engage our mobile audience. We've successfully expanded our mobile reach, and we will continue to do so, especially through programs like Wikipedia Zero. The bigger challenge on mobile now is to begin growing a community of users who contribute through their smartphones and tablets. This will include photo contribution, and we'll also want to experiment with simple editing tasks, microtasks like content curation, and improved contribution user interfaces specifically for tablets. As part of this process, we will bake mobile support fully into MediaWiki core.

We will make small changes that have big impact. An entire team will be dedicated to running small experiments and tests focused on the new editor experience: to learn exactly which software or process changes are likely to increase the engagement of new editors who are able to make valuable contributions. This will help inform our product priorities going forward, and where changes are obvious wins and easy to integrate, we will make them.

We will recognize rich media contributors as first class citizens. MediaWiki was built as a text collaboration platform, with support for various media types bolted onto it over time. While tools like the new Upload Wizard have greatly helped simplify media contributions, there is still a lot of work to do to make the experience of adding a picture or video as seamless as the experience of editing text – and to provide tools for the community to manage quality and metadata.

We will create a language-aware user experience. MediaWiki's internationalization team has made tremendous progress in eliminating barriers to participation, especially for Indic languages. But in its current implementation, language merely exists in the form of a set of disconnected user preferences. We don't recognize the user as, say, a Malayalam speaker; we don't provide a coherent user interface for changing content language, chrome, font, input method, etc. And translation of important information for our users is still a very process-intensive task which does not yet feel like a fully natural part of the experience. Beyond further improving language support, the user experience will be at the center of continuing internationalization efforts.

The foundation of our work is our commitment to sustain and protect Wikimedia's core operations. Half a billion people rely on our projects every month. Given the transformative software changes we're making, including the increasing operational complexity involved, this will indeed be a significant challenge beyond simply adding capacity. We will need to re-architect services, data-center operations, and service management as part of our responsibility to maximize uptime, ensure proper backups and publicly available data dumps, and report service availability and performance.

This ambitious plan requires that we continue to improve the way we work:

We need to partner with the community in complex feature projects. Whenever we undertake complex projects, our goal should be to make visible progress as quickly as possible, and to seek active involvement of volunteers in requirements analysis, design, testing, continued development, and maintenance – in short, in the full cycle of work involved in creating a complex product. Work with the community, accordingly, is a full-cycle engagement by many individuals with many different skillsets. In the next year, we'll build out a whole new function in engineering: community-driven software testing. We will also continue to improve our approaches to technical communications, outreach, translation, etc.

We have to get serious about big data. We've already begun modernizing our analytics infrastructure, and this process will continue. Our goal is to be able to collect and process vast amounts of real-time data about Wikimedia projects' usage, and to be able to understand patterns in that data at any level. This entails building out a whole new architecture for data collection and processing, and a new flexible dashboard for visualizing key metrics.

We need to give the community tools to innovate. The continued development of and support for the Wikimedia Labs project, and continued improvement to on-wiki development capabilities like gadgets and page-embedded JavaScript, will help create the foundation for true innovation with low barriers to entry. In addition to building out new services like toolserver-style database replication, we will need to create better authentication/authorization models (like OpenID/OAuth) which will enable the development of various tools and applications. Finally, as we've switched our development model to Git, we also have opportunities to create technical interfaces with large existing development communities on sites like GitHub and Gitorious.

We can't afford review bottlenecks. Whether it's feedback on features or contributions of code, the process of triaging, responding to, or acting upon any kind of community contribution should be an open one – and we need to actively work to grow communities of volunteer responders, liaisons, code reviewers, and so on. If we become the bottleneck, we will always fall behind, no matter how responsive we aim to be.

We must reduce our technical debt. Our complex legacy codebase continues to weigh heavily on us, and slows down any efforts to transform the user experience. Increasing test coverage, implementing test automation tools, eliminating cruft, and continually deploying code will be essential to increasing the pace of development.[2] The increasing adoption of agile methodology within the Wikimedia engineering organization, and improvements to team culture and team collaboration, are equally important.

Our plan, if resourced, will certainly take us closer to solving the key challenges the Wikimedia Foundation is facing today. Specifically, if we want to increase the retention of new users and the engagement of our community, none of the activities described below are optional – they can merely be deferred. But even if we undertake all these programs, many frontiers will remain, and each new development will unlock new possibilities and opportunities. Our reach, in other words, will continue to exceed our grasp. We can take comfort in the fact that while many of our challenges are not ours alone, our endeavor is unique and changing the world for the better.

The current (2011/2012) site uptime goal for en.wikipedia.org is 99.85%. To date, we are at about 99.97%. The goal for 2012/2013 is more comprehensive: it includes availability goals for the nine Wikimedia project domains, for both ‘read’ and ‘write’. It also takes into account that there will be more MediaWiki releases, and thus a higher chance of introducing bugs and downtime to the platform.

While the Ashburn data center already serves the majority of Wikimedia projects' readers (through our caching layer) and carries over 80% of network bandwidth, it is still not the primary data center: all edits still go through the Tampa data center. We will make the Ashburn data center the primary site.

The ping response times (in ms) from various cities to en.wikipedia.org are listed below. As the numbers show, because our data centers are in Tampa, Ashburn, and Amsterdam, response times are longer for places further away from them. We could shave off a substantial amount of latency, improving users' experience, by adding a caching center closer to them. Today, an average page requires a minimum of 3 round trips to the closest data center before the page can be assembled for the user. Right now, a user in Hong Kong spends half a second just to start downloading the page, on top of rendering latency. A West Coast caching center would remove at least 25% of the total page download time; an Asian caching center would reduce it by about 90%. This effect is multiplied for HTTPS (secure) calls, as well as for subpar network connections, such as mobile or those often found in the Global South. Google found in 2006 that an extra 500 ms of latency dropped traffic by 20%.[3] Amazon likewise found that latency had major effects on sales. By not having an Asian caching center, we are adding 600-1000 ms of latency per page load for our APAC users. A West Coast caching center would reduce latency by 210-350 ms per page load for APAC and West Coast users. For a first caching center, due to network interconnectivity, a West Coast location would give us the biggest impact versus cost.

City            Min (ms)   Avg (ms)   Max (ms)
New York        10.7       11         11.1
Amsterdam       1.4        2.6        10.2
Vancouver       224.3      224.6      224.8
Singapore       263        276        300
Mumbai          223.1      227.3      235.7
Hong Kong       223        224        226
San Francisco   71.9       72.3       72.8
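The arithmetic behind the Hong Kong figure above can be sketched briefly. This is illustrative only: the round-trip times are the table's averages, and the three-round-trip minimum is the one stated in the text.

```javascript
// Estimate connection-setup latency before a page starts downloading,
// assuming (per the text above) a minimum of 3 round trips to the
// closest data center. Values are the average RTTs from the table.
const avgRttMs = {
  'Hong Kong': 224,
  'Singapore': 276,
  'Amsterdam': 2.6,
};

function startupLatencyMs(city, roundTrips = 3) {
  return avgRttMs[city] * roundTrips;
}

console.log(startupLatencyMs('Hong Kong')); // 672 ms before content starts arriving
```

At roughly 224 ms per round trip, even two or three round trips put a Hong Kong user past the half-second mark cited above, before any rendering begins.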

We will actively seek ways to lower our operating expenses such as renegotiating better contracts and exploring donation opportunities.

Most of the interdependencies are internal to engineering. New services are typically built out in partnership with development teams, and utilize Wikimedia Labs as a staging platform. The site performance team helps provide monitoring and advice to ensure all services operate efficiently (see, for instance, the response time goals also mentioned in #Site performance).

As of September 2012, the Labs infrastructure has the following statistics:

Number of projects: 126
Number of instances: 235
Amount of RAM in use (in MB): 646,144
Amount of allocated storage (in GB): 10,020
Number of virtual CPUs in use: 349
Number of users: 624

Tool Labs will expand labs with capabilities currently offered by the Wikimedia Toolserver cluster (which is hosted by Wikimedia Germany). This includes database replication from the live sites. It will address data privacy concerns, come with better hardware, and will be built on the same open source stack as the rest of Labs, enabling a straightforward transition path from tools to MediaWiki development.

To the extent that Wikimedia's projects are seeing declining or stagnating participation, our goal is to arrest those trends and return to positive growth in high quality contributions. The key activities that will support this goal are undertaken by multiple teams: the editor engagement features team, the editor engagement experimentation team, the visual editor team, the multimedia contribution team, and the mobile contribution team. We also see site performance as strongly linked to editor engagement, but are listing it separately below.

The notifications system ("Project Echo") seeks to unify the delivery of interaction messages in MediaWiki core under a common API, in a manner that can be extended for performance and scalability, and to provide a uniform interface for users to manage their notifications.

The messaging system seeks to introduce a public user-to-user messaging system that incorporates modern concepts such as conversations, notifications, and UI that matches user expectations. It will not yet aim to replace all talk pages, but instead focus on user-to-user interactions.

Currently interactions on the site are handled in an ad hoc manner. For example:

A user edits a page; if it is on someone's watchlist, the watchlist is updated (which may also generate an e-mail notification).

If a user wants to initiate direct communication, the user writes on the recipient's user talk page. An in-wiki indicator is set ("You have new messages").

Fundamentally, any approach to dealing with editor engagement is about an interaction between two users, either directly (messaging) or mediated by an object on the wiki (an article page, for instance). The essential key to closing the communication loop in editor engagement interactions is some form of real-time notification about what's happening. We believe notifications features will help increase retention of editors who have already decided to make their first edit on Wikipedia. We know from speaking with our existing editors that one of the reasons they come back is that something is "always happening on Wikipedia": someone expands an article they just created, someone sends them a barnstar, someone writes on their talk page. Actions such as these draw a user back to Wikipedia. Oftentimes, new users don't even know that these actions have taken place. By notifying users in a visible, user-friendly way that these actions have happened, we hope to draw new editors back to Wikipedia and increase retention.

Currently, notifications are handled in an ad hoc, spaghetti manner that can't be extended. This means each notification or message in the system must separately solve each of the following problems, which should instead share a unified UI and architecture:

How the user manages their preference on receiving notifications

How a notification is delivered to the user across multiple possible endpoints. Right now there is one possible endpoint for each notification, and only two endpoints exist at all. For example:

Use real-time web sockets to push a notification while the user is idle?

Show status on mobile web skin?

Push a notification as a mobile push?

Push a notification over SMS?

Push over IM services or IRC?

Push to a third party service or bot?

How the notification list is accessed in the UI, for multiple types of UI (e.g. mobile)

The message actually sent, as a function of the constraints of the UI (e.g. mobile push and SMS notifications have length restrictions)

Each interaction combination of the above has a scalability consideration

Each interaction combination of the above has a performance expectation (asynchronous: publisher of notification does not expect to be blocked)

Third-party support when integrated into MediaWiki core (especially as asynchronous processing is not easy to support in PHP)

How these notifications might be bundled ("Page has been updated 15x" instead of receiving 15 separate notifications)
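The unified architecture the list above argues for can be sketched as follows. This is a hypothetical illustration, not Echo's actual API: events are published once, bundled, and then delivered through pluggable endpoints, each of which may reformat the message for its own constraints.

```javascript
// Hypothetical sketch (not Echo's actual API) of a unified
// notification hub: publishers fire events; endpoints plug in.
class NotificationHub {
  constructor() {
    this.endpoints = [];
    this.pending = new Map(); // "type:page" -> count, for bundling
  }

  // New delivery channels (web, e-mail, SMS, IM, ...) register here;
  // publishers never need to know which endpoints exist.
  addEndpoint(endpoint) {
    this.endpoints.push(endpoint);
  }

  publish(event) {
    // Bundle repeated events: "Page has been updated 15x" instead of
    // fifteen separate notifications.
    const key = `${event.type}:${event.page}`;
    this.pending.set(key, (this.pending.get(key) || 0) + 1);
  }

  flush(user) {
    const delivered = [];
    for (const [key, count] of this.pending) {
      const page = key.split(':')[1];
      const text = count > 1 ? `${page} has been updated ${count}x`
                             : `${page} has been updated`;
      for (const ep of this.endpoints) {
        delivered.push(ep.deliver(user, text));
      }
    }
    this.pending.clear();
    return delivered;
  }
}

// Example endpoint with a length restriction, like SMS or mobile push.
const smsEndpoint = {
  deliver: (user, text) => `sms to ${user}: ${text.slice(0, 20)}`,
};
```

Two edits to the same page then produce one bundled message, and the SMS endpoint truncates it to its own limit, without the publisher knowing either detail.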

Communications between new editors and experienced editors are an important part of the "onboarding" process for new editors. New editors learn the ins and outs of editing Wikipedia through interactions with experienced editors, largely through the existing user talk system. The existing messaging dynamic (current user talk pages) is not based on any existing affordance that is found outside the MediaWiki space and has many problems. Among the issues:

there are unclear social conventions of how to respond

there is no single canonical place for this interaction to live

the concept of a conversation is vague (both from a user and a technical perspective), and notifications (see above) to all parties are crude or non-existent.

the UI does not represent anything resembling a modern discussion system (does not resemble any existing web affordance).

there is no consistent or relatable framework for surfacing a new messaging event (update) to the user (in the form of a notification)

Messaging actually covers a wealth of user-to-user interactions and requires a notification system to close the loop. The primary focus in the next fiscal year is only on a single user-to-user "talk page"-like discussion, spec'd out in a "mobile-first" design manner to create realistic constraints on the complexity of the problem.[4]

The relationship between a new user-to-user messaging system and the existing talk page system is still to be determined. Ideally, we will have one system that works for all of our user groups. We realize, however, that our experienced users rely heavily on the existing user talk page system. We will need to evaluate the various options (e.g., the additional development effort required for a backwards-compatible system, the increased complexity and confusion of having two systems run in parallel, transitioning to a new system while keeping an accessible archive of existing discussions, etc.) before making a determination.

This team conducts both features work and the support/maintenance work identified below, with percentages worked out based on specific projects' needs and priorities:

Brandon Harris

Fabrice Florin

Dario Taraborelli (shared with experimentation team)

Ryan Kaldari

Benny Situ

Matthias Mullie

Luke Welling

Vibha Bamba (shared with mobile team)

2011-12 hire: SDE Frontend

Very low availability (10 hours/week), providing limited support:

Andrew Garrett

Timo Tijhof

For the notifications project, we'd ideally also like to partner with Wikia, who have built their own lightweight notification system which might serve as the model for the shallow (MediaWiki core) implementation.

Most modern websites (Facebook, Google+, Twitter) deal with both the notification and the messaging problem by setting up a performant, scalable message-passing queueing system under a common architecture. Such an architecture can be built if hooks and interfaces are standardized in MediaWiki core in a manner that allows graceful fallback for third-party installs.
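The "graceful fallback" idea can be sketched like this (a hypothetical illustration, not existing MediaWiki code): publishers fire events through one interface; a Wikimedia-scale install plugs in a real job queue so the publisher is never blocked, while a small third-party install, where asynchronous processing may be unavailable (as in plain PHP), falls back to synchronous in-process delivery.

```javascript
// Hypothetical event bus with an optional asynchronous queue backend.
function makeEventBus(queue) {
  const handlers = [];
  return {
    subscribe(fn) {
      handlers.push(fn);
    },
    publish(event) {
      const job = () => handlers.forEach((fn) => fn(event));
      if (queue) {
        queue.enqueue(job); // deferred: processed out of band
        return 'queued';
      }
      job(); // fallback: deliver immediately, in-process
      return 'delivered';
    },
  };
}

// Without a queue backend, delivery is synchronous but still works.
const seen = [];
const bus = makeEventBus(null);
bus.subscribe((e) => seen.push(e.type));
console.log(bus.publish({ type: 'talk-page-message' })); // "delivered"
```

The point of the design is that code publishing a notification is identical in both deployments; only the wiring behind the interface changes.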

In addition to the above, messaging has backing data store and management requirements that differ from the existing "wikipage" paradigm, which is both too fully featured (it entails requirements like being able to edit other people's "messages", revision history, and the ability to change discussion order) and not featured enough (convention enforces a "threaded" discussion, there is no notification of delivery, and a discussion lives in only one place without at least automatic indication to all participants of a new message).

Notifications work involves standardizing both the message types (an extensible architecture for adding more message types) and support for existing and future endpoints (* means currently in MediaWiki):

Features developed by the Editor Engagement team will require maintenance (e.g. analysis, bug fixing, enhancements, etc.). There also may be features from the EEE team which will require productization.

Much of the rationale for the Editor Engagement Features team is applicable to this team as well. The difference between the two teams is how they address the problem of editor decline. While the former team (EE) will focus on medium- to long-term foundational projects, the EEE team will undertake deep community, product, and policy thinking, testing different ways to approach editor retention.

Over the past year, we have learned a lot about editor behavior by looking at Wikimedia's raw data in new ways. Based upon ideas extracted from that data, we began experimental projects, such as A/B testing of user warning templates, to evaluate our assumptions about user behavior as related to editor retention.

We see real value in continuing to experiment with quick-and-dirty ideas that may help us reverse this decline in Wikipedia participation. As such, we have formed a new cross-functional team tasked specifically with conducting these small, rapid experiments. Our goal is to conduct many different types of experiments -- maybe just simple tweaks, maybe ideas that should become fully fledged new features -- and then feed the features, ideas, and changes that show the most potential impact into the product pipeline (e.g. a user account creation process overhaul), into the community (e.g. supporting policy changes), or even have the experiments team quickly implement adjustments if that's the quickest path to positive change (e.g. changing the content of user warning templates).

Some experiments may be technically focused, some community-focused, and all will have a strong measurement component that includes explicit recommendations -- e.g. “Experiment X shows no impact on editor retention, and may even have a negative impact, and should not be considered for inclusion in the product” or “Experiment Y approach shows a potential 2x impact on retention of editors in the 50-500 edit range and should be included in the roadmap for ABC features.”

The backlog for this team is currently being built out. A draft spreadsheet listing and ranking potential experiments is available here. Note this is not yet ready for consumption outside of WMF. Experiments will be listed onwiki as we undertake them.

Impact assessment of conducted experiments will occur on an ongoing basis. Results -- both positive and negative -- will be communicated to other product teams, foundation staff, and the community as experiments are completed.

The team will employ agile methodologies to sprint on the experiments we choose to run.

Experiments may be product or community focused, e.g. policy test vs. A/B test of a feature.

Should an experiment show promise as a feature that can be integrated, the team will coordinate with other Product, Engineering, and Analytics team members as appropriate to get the new feature into the roadmap as soon as is reasonable.

Analytics: Most, if not all, of the E3 features will require measurement that existing functionality (e.g. the existing click-tracking extension) does not cover. We will work closely with the Analytics team to ensure that our requirements are covered in their build-out plans.

Hiring: this new team will not be able to code new experimental features without engineering or UI/design staff.

Community: The new team will continue to rely on community input and community contacts to socialize product ideas. We will also continue holding editor research meet-ups to focus-group test ideas and/or gather new ideas for experimentation.

Since the budget submission, we've decided to defer the launch of the multimedia team until higher-priority existing teams are fully resourced. This will affect the timeline below, which has not yet been updated to reflect this fact.

The proposed Multimedia team will build features that will enable easier contribution of multimedia content to Wikimedia projects. Specifically, the following areas will be addressed:

Improve curation and feedback tools to manage new and existing contribution streams

Enable multimedia contributions in a more user-friendly and seamless manner

Improve display of multimedia content

The red line represents indexed growth of the number of Wikimedia Commons contributors since February 2009, compared with other top projects. As can be seen, Wikimedia Commons is the strongest performing project in terms of relative contributor growth.

At the present time, the number of Commons contributors is one of the few editor engagement metrics that are increasing. Over the past year, Commons has seen 25% year-over-year growth in contributors. The web is also moving towards more visually driven interfaces, so having strong multimedia support helps WMF meet the expectations of modern readers.

In fiscal year 2011-12, we developed technology infrastructure, previously unavailable, to support the storage and use of large amounts of multimedia. This is a necessary precondition for increased investment in contribution tools in the next fiscal year.

When developing new contribution streams (mobile photo uploads, improved integration of uploading into Wikimedia projects, etc.), we have to keep in mind that we are likely to receive a significant amount of low-quality or inappropriate uploads. Accordingly, we want to focus our investment not simply on increasing the inflow of new contributions, but also on improving quality management tools, including simple means for audience feedback.

Mobile will hit us in Q1 with their first pilots related to mobile photo upload. Likely we won't see immediate uptake, but if the team is successful in driving mobile media contributions, this will dictate a strong focus on curation/quality control tools early on -- a good problem to have.

This will drive a lot of storage load on Site Operations, and we may need to build out a larger job-queuing and transcoding infrastructure as TimedMediaHandler goes online, video on Commons grows drastically, or the community comes to a decision on policy supporting (or not) license-encumbered codecs (i.e. H.264 output for mobile devices older than iOS 5 and Android ICS).

To solve this problem, a working two-way parser is needed. In the current platform implementation, wikitext is converted directly to output, with no association between the output and the underlying wikitext. Part of the Visual Editor project involves a parser that translates wikitext into a working two-way data model, and an API that provides reliable client-server interaction on this model.
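What "two-way" means can be illustrated with a toy model (this is not the project's actual data model): each parsed node records the source range of the wikitext that produced it, so untouched content round-trips byte-for-byte and only deliberately edited nodes are re-serialized.

```javascript
// Toy two-way parser: only '''bold''' spans and plain text.
// Each node carries the [start, end) offsets of its source wikitext.
function parseBold(wikitext) {
  const nodes = [];
  const re = /'''(.+?)'''/g;
  let last = 0;
  let m;
  while ((m = re.exec(wikitext)) !== null) {
    if (m.index > last) {
      nodes.push({ type: 'text', text: wikitext.slice(last, m.index), src: [last, m.index] });
    }
    nodes.push({ type: 'bold', text: m[1], src: [m.index, re.lastIndex] });
    last = re.lastIndex;
  }
  if (last < wikitext.length) {
    nodes.push({ type: 'text', text: wikitext.slice(last), src: [last, wikitext.length] });
  }
  return nodes;
}

// Unedited nodes are copied verbatim from the original source, so
// loading and saving an unchanged page is a no-op ("clean" roundtrip).
function serialize(wikitext, nodes) {
  return nodes
    .map((n) => (n.dirty ? (n.type === 'bold' ? `'''${n.text}'''` : n.text)
                         : wikitext.slice(n.src[0], n.src[1])))
    .join('');
}
```

Because only dirty nodes are re-generated, an edit to one bold span cannot perturb the markup of the rest of the page, which is the roundtripping guarantee described in the goals below.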

The Board feels that editor retention is the most important focus for the Foundation.

How the visual editor addresses the above problem:

A visual editor removes the two major impediments for new or inexperienced editors, thus increasing the number of Wikimedia contributors:

The technical expertise to deal with wiki markup and templating usage is a high hurdle

The UI for editing is terrible

Many key features for Editor Engagement are dependent on a working two-way parser that the project is building. Examples:

An annotation/collaboration (e.g. comments in Google Doc) Talk Page replacement needs access to the underlying wikitext from within the output UI for positioning.

Improvements for directed forms of user involvement (microtasking) can only be hooked into a user-friendly, context-sensitive UI when the mapping from output to the underlying markup language is available programmatically

Visual diffs would now be possible and assist new and existing editors with evaluating changes to the Wiki.

Other key features for Editor Engagement are dependent on a working visual editor. Examples:

Messaging would probably occur in abbreviated wikitext and would require a lightweight visual editor.

A VE opens the door to other modes of communication that aren't as heavy with convention as "Talk Pages" such as chat, forums, threaded discussion systems, and annotations.

Recording time-sensitive diffs is possible within the visual editor framework. This could help editors catch things that are hard to find in a regular diff. (For instance, a single change where a malicious editor moves a block of text from one area to another and buries a policy-violating change inside the block becomes evident when it is played back in time.)

Some obscure wikitext patterns may need to be renormalized and converted to meet the above goal, but the target behavior is for the parser to work without such renormalization

Should mark output content so bi-directional roundtripping does not modify the original wikitext

Will hopefully become the canonical description for the underlying wikitext (folded into MediaWiki core)

A working parser allows for two-way interaction between the user interface and the underlying wikitext

Ability to load and save an entire wiki page using the visual editor.

Ability to extend the user interface with user-created gadgets and/or wiki-specific features and extensions (e.g. allowing the citations extension to add UI functionality for also enabling ISBN lookups, i18n, etc within the Visual Editor).

When in production, there may be a need for some node.js infrastructure (this is not finalized), though this is likely to be contained by repurposing existing parser infrastructure for the more efficient parser.

If a collaboration feature is added, there might be a need for additional infrastructure resources

For 2012-13, we would like to professionalize our process of performance engineering. In doing so, we plan to appreciably improve the user experience for editors and readers, and more efficiently utilize the capacity of existing hardware.

Specifically, we aim during 2012-13 to do the following:

Define a short list of key site performance metrics, including (but not limited to) average uncached article rendering times on a baseline of Featured Articles.

Make substantial improvements in key site metrics, including a 75% decrease in article rendering times on the Featured Article baseline.[5]

Identify and analyze templates critical to site performance, working with the editor community to address performance issues in those templates.

Provide and promote accessible tools for improving performance of templates.

It is a well-studied phenomenon that even small delays in response time (e.g. half a second) can result in sharp declines in web user retention.[6][7] As a result, popular websites such as Google and Facebook invest heavily in site performance initiatives and, partially as a result, remain popular. Formerly popular sites (such as Friendster) suffered due to lack of attention to these issues.[8] Wikipedia and its sister projects must remain usable and responsive in order for the movement to sustain its mission.

Over the years, we have prioritized the experience of non-editor users at the expense of logged-in ones. This made a lot of sense when operational and financial resources were tightly constrained and flagship websites had a similar preference for the logged-out experience (news sites, shopping sites, blogging, etc.). Over the last 5+ years, however, most Web 2.0 sites (Facebook, Twitter, etc.) have come to count only the logged-in editor as a "user" and have optimized their experience to be more participatory.

During those same five years, our priority has created an experience that could be termed actively hostile to our editors, and has made it difficult or impossible to build participatory and engagement features based on Web 2.0 affordances. While a reader receives content instantly from the caching layer, a logged-in user must receive that same content directly from an architecture not designed to support them. Similarly, feature development is hampered by having to design in UI, software architecture, caching, and operational considerations that are taken for granted on nearly any other website.

For instance, currently, complicated and popular articles (e.g. "Barack Obama") often take 30 or more seconds to render when the cache is invalidated for an article (e.g. when the article is edited, or when an included template is edited). While article rendering is possibly an extreme example, we have several other pockets of our systems that have similar problems.

Our volunteer and paid developers have few tools to understand how their work impacts performance (for good or for bad). Furthermore, even editors have an impact on performance, and they have few tools to understand what that impact is.

This problem is compounded by the fact that our architecture (both software and operational) was never designed as a whole to loosely couple its parts for scalability: each new feature added to the site brings a corresponding slowdown in overall site performance, an increase in operational requirements (new hardware), and additional operational complexity.

As one example, External Storage solves the data storage problem by partitioning the storage for page text and associated information vertically (a dedicated machine group for each problem) and horizontally in time (older data lives on different machines than newer data). This means the load/reliability characteristics are not even across machines within a partition (processing power on machines holding older articles sits almost completely unutilized), do not resemble those of similar machines in other partitions (usage of page text and usage of user tables are different), and create bottlenecks in the system (all requests for a particular piece of information must go to particular machines). The net result is that while this approach solves the problem, it treats each machine as a unique snowflake, making it increasingly difficult (and expensive) to support existing features or add to them.
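The time-based partitioning described above can be sketched in a few lines. This is an illustrative Python model, not MediaWiki's actual configuration: cluster names and revision-id boundaries are invented, but the routing logic shows why load is uneven, since all current writes land on the newest cluster while older clusters sit idle.

```python
# Horizontal partitions: (first revision id served, cluster name), oldest first.
# Names and boundaries are hypothetical.
TEXT_CLUSTERS = [
    (0,          "cluster1"),   # oldest revisions; nearly idle
    (10_000_000, "cluster2"),
    (50_000_000, "cluster3"),   # newest revisions; takes all current writes
]

def cluster_for_revision(rev_id):
    """Route a revision id to the storage cluster holding its page text."""
    chosen = TEXT_CLUSTERS[0][1]
    for first_rev, name in TEXT_CLUSTERS:
        if rev_id >= first_rev:
            chosen = name
    return chosen

print(cluster_for_revision(123))          # old revision -> cold cluster
print(cluster_for_revision(60_000_000))   # recent revision -> hot cluster
```

Every request for a given revision must go to exactly one cluster, which is the bottleneck the paragraph above describes.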

Now imagine a situation where each new feature targeting the Editor Engagement problem runs into this end-to-end operational problem and its unique performance considerations before every deploy, with no way of knowing its impact (until it has a seriously detrimental one). Moreover, until recently there was no insight that any change had created such an impact, and there is still no accountability that couples a developer's or editor's change to that impact, so that they could learn and improve their designs through diligent practice. The feedback loop on architectural and programming practice at the Foundation is essentially unclosed.

We need to invest in tools that make it possible for developers and editors to know what impact they are having, both so that we can accurately assess when a feature is creating a performance problem, and so that we can better understand the return on our investment in performance. That visibility will let us focus our effort on the most meaningful improvements, instead of relying on gut feel and lore to decide what is "good" or "bad" for site responsiveness. And of course, we need to use this information to actually improve site performance.

The metrics we establish will help clarify our goals. We're already well aware of our worst performance area: article rendering for uncached articles. Since performance issues tend to be most pronounced on Featured Articles (which typically have lots of references and advanced use of templates, and are by definition the type of article we want more of), we plan to establish a cohort of Featured Articles to use as a baseline to measure rendering performance.[5] We believe we can substantially improve performance in this area in the coming year. However, we don't want to focus exclusively on article rendering time, so we plan to establish other important metrics we will track and improve on.

We made some progress on performance in 2011-12. Asher Feldman deployed several performance measurement tools, such as Graphite, which have already helped us spot regressions.[9] Tim Starling finished our disk-backed object cache project, which by one measure decreased average response time by 80-100 milliseconds. Tim also intends to make significant progress on introducing Lua as a new, faster template language alternative. However, neither Asher nor Tim has had the ability to focus sufficiently on performance to make the kind of progress that we need, since both play critical roles in the day-to-day operation of the website.
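For readers unfamiliar with Graphite: its Carbon daemon accepts metrics over a simple plaintext protocol, one `path value timestamp` line per metric, which is part of why it is cheap to instrument code against. A minimal sketch (the hostname and metric path below are hypothetical):

```python
import socket
import time

def graphite_line(path, value, timestamp=None):
    """Format one metric in Graphite's plaintext protocol: 'path value timestamp\n'."""
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{path} {value} {ts}\n"

def send_metric(host, path, value, port=2003):
    """Ship a single metric to a Carbon endpoint (hypothetical host)."""
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(graphite_line(path, value).encode("ascii"))

# e.g. send_metric("graphite.example.org", "mediawiki.parse.time_ms", 31400)
print(graphite_line("mediawiki.parse.time_ms", 31400, 1340000000))
```

Because emitting a metric is one line of code, regressions show up on a graph rather than in an incident report.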

More informed template authors with the tools necessary to avoid performance pitfalls and the ability to keep page rendering time at acceptable levels.

More informed developers with the tools necessary to avoid performance pitfalls.

Reduce ping times from all worldwide locations to <150ms according to Watchmouse and site24x7.com. Reduce ping times to all European and North American locations to <80ms according to the above resources.

Upload is our first contribution feature. It will be completed for an experimental community trial by July. At that point we'll shift to experimenting with ways to engage casual users in multimedia contributions. This work will interface with, and depend on, the multimedia team proposed in this plan. The multimedia team will develop desktop-focused curation tools, while the mobile team will develop mobile-focused curation tools for multimedia.

The mobile-focused curation tools may be our first microtask experiment on mobile devices. If we find that mobile multimedia contributions are not taking off, we will likely shift contribution efforts to other initiatives.

With regard to mobile editing, we are not making any assumptions about which forms of editing are likely to be used. However, all plausible forms require baseline support within the mobile infrastructure for text parsing and text manipulation. This is an infrastructure project that will likely not pay off immediately. We can target early text contribution efforts, such as block-level editing and new page creation, but ultimately our priorities will depend on where we see productive user adoption.

For microtasks, the mobile team will likely need to interface closely with the experimentation team, which already has a list of microtasks it would like to try but will bias towards desktop experiments if we do not have a proper orientation towards the mobile UI and APIs.

Our mobile page view growth continues at 5-15% every month, but these users can't contribute. In order to reverse the trend of editor decline, we need to capture new users coming online primarily (and sometimes only) on mobile devices.

Mobile will finally provide its two billion page views a month with a simple and easy set of contributory pipelines. Since mobile is seeing the biggest rise in readership, it only makes sense to start funneling those readers into contributors. This could go a long way toward solving our editor retention problem.

Data charges and technological barriers should not impede access to our projects. There are easy ways to reach a significant number of people if we are innovative about how they can access Wikipedia (through "Wikipedia Zero" partnerships, SMS/USSD access, S40 J2ME access, etc.).

Our mobile projects have been extremely successful thus far, but they can't continue to scale at our growth rate unless we better integrate them into the core of MediaWiki, simplify our data sets, build a stronger API, and get better analytics. We have to merge the common functions of MobileFrontend into core so that we can work with non-mobile developers on mobile solutions for our organizational reach goals.

Our goal is to broaden the reach of the Wikimedia projects, including reader and editor engagement, by developing MediaWiki language support tools (i18n), developing translation tools for Wikimedia L10n communities, conducting user testing with language communities, collecting feedback, and working collaboratively with other open source projects to grow our language support and developer groups.

Wikimedia projects support 284 languages today with a community of translators and some tools. The i18n engineering team is developing tools for input methods, output methods, search, and translation, supporting mobile and editor engagement efforts such as the visual editor, and evangelizing the usage of these tools in various language communities.

The Wiki Movement has a chronic need for analytics. We need it to understand our editors, to encourage growth, to engender diversity, to focus our resources, to improve our engineering efforts, and to measure our success. It permeates nearly all our goals, yet our current analytics capabilities are underdeveloped: we lack infrastructure to capture editor, visitor, clickstream, and device data in a way that is easily accessible; our efforts are distributed among different departments; our data is fragmented over different systems and databases; our tools are ad-hoc.

Rather than merely improve existing jobs and data pipelines, the Analytics Team aims to construct a Data Services Platform capable of mining intelligence from all datastreams of interest, providing this insight in real time, and exposing it via an API to power applications, mash up into websites, and stream to devices.

The fundraising tech team works closely with the fundraising production and fundraising creative teams that are part of the Community Department. The team also works with LCA to ensure compliance with various privacy policies, execution of contracts with payment providers, and community support.

Any security engineer that we hire will work closely with the FR-tech team to ensure that our systems are secure.

Analytics - There is currently nobody dedicated to fundraiser analytics.

The MediaWiki software at the heart of our software infrastructure needs continuous modernization in order to support our ambitious initiatives and in order to improve site stability.

In 2012-13, there are core technologies needed to support new innovation. We need support for new MediaWiki revision types and some level of data transclusion to support Wikimedia Deutschland's Wikidata effort. A number of parts of the core software will need to be reworked in order to support flexible methods of user notification. OAuth will allow users to securely grant new tools the ability to take actions on their behalf (such as transferring images from other websites), without needing to share their password with anyone.

Our software also needs many improvements to increase our operational efficiency and stabilize our infrastructure. The way we configure our software (global variables) hasn't changed since the very early days of the project, despite enormous problems in maintainability, testability of components, and the ability to flexibly configure our systems. We require shell access (and often staff time) to configure many things that site admins should be empowered to change. We need to more fully support our new ability to serve from multiple datacenters, by making it possible to seamlessly switch between data centers without noticeable glitches (such as loss of session data). We need to continue to improve MediaWiki's ability to handle different storage techniques such as Swift, so that we can expand our media storage, remove upload limits, and prevent the imminent exhaustion of our existing media storage. Our search infrastructure also needs improvement.

In 2010-11, we only managed one deployment of MediaWiki (1.17) to the cluster, and that deployment took a couple of attempts. In 2011-12, we implemented Heterogeneous Deploy support in our tools, which let us roll out 1.18 and 1.19 in a gradual way. We also switched our version control system from Subversion to Git, and switched our review process to pre-commit reviews. As of this writing, we haven't yet deployed out of Git, but we're already having serious conversations about a bi-weekly deployment process. We also carved out 20% of our overall development capacity for code review and general community-reported bugfixing.

In 2012-13, we need to continue to invest in this area. While we believe that Git represents a net positive for us, there have been significant regressions in productivity relative to Subversion (such as code review tooling), which need to be addressed. And while we made a modest increase in our deployment frequency in the last fiscal year, we would get significant value from deploying far more frequently.

We need to do this while also supporting ongoing operational needs of the site and supporting our development and editing communities. We also need to be proactive in training new developers in the best practices for secure software development.

In 2011-12, Wikimedia Foundation established a beginning for quality assurance activities. We hired Chris McMahon as our QA Lead, and brought in Antoine Musso to help with our test automation infrastructure. We also have a Bugmeister who helps prioritize and assign the bugs submitted through our public bug tracker.

We don't believe our current hiring is sufficient. WMF today has 22 full-time developers and approximately 20 more part-time or contract developers, in addition to a large contingent of volunteer developers. While there is no standard ratio of developers to testers, and that ratio varies widely across the software development landscape, one commonly quoted industry figure is three developers to one tester. Unfortunately, we can't afford that luxury, so we're going to have to be very strategic about the hiring we do in this area.

In 2012-13, we plan to dedicate resources to streamlining test automation, rallying community support for test efforts, providing infrastructure for better developer collaboration with testers, and providing burst testing capacity when needed. This all will help ensure that our site remains stable.

This has a significant dependency on our "Wikimedia technical community" efforts to recruit and retain a robust testing community.

We aim to support volunteers and companies/orgs who work on Wikimedia technology, enable them to achieve more with each other and with WMF, and (when possible) align them with Wikimedia movement goals (especially new editor engagement and the visual editor).

Our volunteer encouragement, mentoring, and alignment “funnels” have holes at various points for operations, documentation/project management, testing/bug reporting/bug triage (QA), and software development activities. We have a steady stream of new software developers interested in joining MediaWiki development and of new system administrators interested in using Wikimedia Labs. However, we are not as strong at finding or coordinating project management or QA activities, at mentoring new developers, or at providing a compelling and usable development environment in Wikimedia Labs that helps sysadmins and developers more easily write, test, and puppetize their changes. We have identified the strategic weak points in these processes and aim to strengthen them.

Given that, we aim to reduce our emphasis on initial software development outreach and to improve our mentorship of the existing development community, except in partnering with Global Development and local organizations where we have a strong interest in growing the local Wikimedia community (Brazil and India). We instead aim to support staff and volunteers in mentoring developer volunteers who come in via existing intake processes. And we will partner with QA and with Ops to strengthen our volunteering pipelines in those areas.

Our technical community leans strongly towards development and lacks systematic testing. While the QA team and MediaWiki platform developers will automate what testing can be automated, there is simply no adequate substitute for trained manual testers to find bugs and assure the quality of our service. Since we cannot afford to hire many testers, we are choosing to hire a volunteer QA coordinator to train and lead unpaid testers as a key strategy of our QA efforts.

Currently: Software developers can contribute but sometimes wait for weeks or months before seeing their changes merged; new volunteers don't get aligned to movement priorities.

Goal: Facilitate the intake and growth of volunteer developers and their ability to contribute software, and align them to the movement priorities. All new changesets from volunteers get an initial comment within one week of the merge request.

Ops volunteering

Currently: We have very few volunteers who can lead system administration projects or contribute to ops.

Goal: Many people can puppetize packages, and do so. They don’t need to whine as much to get things done.

Engineering project documentation and product management volunteering

Currently: A few volunteers make occasional edits to update project documentation, or make efforts to write specs, gather feature requirements, or do other product management work.

Goal: There is a team of on-call documenters, and multiple volunteers reliably update documentation about projects they care about. At least one volunteer consistently contributes to product management work.

Goal: a team of on-call MediaWiki documenters who can sprint on specific areas, and up-to-date documentation for the MediaWiki API and for the extensions that Wikimedia Foundation deploys

QA volunteering

Currently: Many people find and report bugs, but we have no systematic volunteer testing and nearly no systematic bug triage by volunteers.

Goal: A test squad of good testers whom we can call upon to test particular components/tools at particular moments and provide good bug reports, so that we can do strategic outreach, and a bug squad of good bug triagers who continuously triage bugs. Ongoing training to improve testers' and triagers' skills.

↑ A document consists of many components like images, tables, citations, infoboxes, etc. Our goal is to support as many of these components as possible, but at minimum, where we're not able to offer a specialized visual UI, we'll at least make it possible to see the rendered output and manipulate the underlying markup. An example would be a mathematical formula in LaTeX. While ideally such a formula should be editable through a dedicated formula editor, it's very unlikely that we'll have such an editor by July 2013. We will, however, have the API support that will allow any developer to create it.

↑ Part of our technical debt that we'll have to continue to carry is the commitment for MediaWiki to be useful to third parties – but we'll need to make hard choices about whether every new feature will be runnable on a standard MediaWiki stack (PHP/MySQL) without complex dependencies like Node.js.

↑ By "mobile first" we mean focusing on the experience/purpose of messaging first, thus setting realistic and achievable goals for a messaging system that would run in parallel to (not necessarily as a replacement of) existing user-to-user communication (talk pages, Liquid Threads, e-mail). We do not mean targeting the mobile platform first with either notifications or messaging. "Mobile first" is the design paradigm which states that by focusing on mobile platforms first, one creates adequate design and development constraints on a project. In this context the consideration is that previous attempts at talk-page replacements (e.g. Liquid Threads) started with the full feature set of the existing user experience and MediaWiki-specific models, which created so many requirements on the system that it reached the point of near-abandonment and rewrite (Liquid Threads 3.0).

↑ 5.0 5.1 Our current back-of-the-envelope calculation is that we should see a 75% improvement on median response time with a concerted effort in this area. However, the exact goal here is subject to debate and refinement (for example: we may choose to focus on 99th-percentile performance rather than median performance).
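The choice between median and 99th-percentile targets matters because a small tail of very slow requests barely moves the median while dominating the high percentiles. A toy Python illustration with invented latency numbers:

```python
import statistics

# Hypothetical sample of 100 request latencies in milliseconds:
# 98 fast requests, one slow one, and one pathological uncached render.
latencies = [120] * 98 + [900, 30_000]

median = statistics.median(latencies)
# Nearest-rank 99th percentile over the sorted sample.
p99 = sorted(latencies)[int(0.99 * (len(latencies) - 1))]

print(f"median = {median} ms, p99 = {p99} ms")
```

Here the median reflects only the typical fast request, while the 99th percentile surfaces the slow tail that experienced editors actually hit; a median-based goal could be met without touching the worst cases.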

↑ "Wallflower at the Web Party", New York Times, October 15, 2006. Quote: "Kent Lindstrom, now president of Friendster, said the board failed to address technical issues that caused the company’s overwhelmed Web site to become slower."