The Parsoid team is responsible for removing the network-heavy metadata (data-parsoid) from rendered output, storing it in metadata storage, and maintaining a map of element ids to Parsoid-specific metadata. This metadata is required for accurate serialization after edits, not for regular page views.

This has the side-effect of eliminating cache/storage fragmentation. Logged-in page views will still require front-end JS to fetch information about red links, user preferences, etc. and update the view (this could be done by the Parsoid team or the services / platform teams).
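The split described above can be sketched roughly as follows. This is a hypothetical illustration, not Parsoid's actual implementation: it strips inline data-parsoid attributes from a rendered fragment and stashes them in a side map keyed by element id, assuming every annotated element carries a stable id.

```python
# Sketch (assumed, not Parsoid's real code): separate data-parsoid
# attributes from rendered output into an id -> metadata map.
import json
from xml.etree import ElementTree as ET

def strip_data_parsoid(html_fragment: str):
    root = ET.fromstring(html_fragment)
    side_map = {}
    for el in root.iter():
        # Pop the private metadata off the element, if present.
        dp = el.attrib.pop("data-parsoid", None)
        if dp is not None:
            # Assumes a stable id attribute exists on annotated elements.
            side_map[el.attrib["id"]] = json.loads(dp)
    return ET.tostring(root, encoding="unicode"), side_map

html = '<p id="mwAA" data-parsoid=\'{"dsr":[0,11,0,0]}\'>Hello world</p>'
stripped, meta = strip_data_parsoid(html)
# meta == {"mwAA": {"dsr": [0, 11, 0, 0]}}; stripped HTML has no data-parsoid
```

On a later edit, the serializer would fetch the side map from metadata storage and rejoin it with the DOM by id; regular page views never pay for it.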

The services team has the high-level goal of building the infrastructure for this (Rashomon, an API with redlinks, etc.).

Requires parser tests to be Tidy-enabled.

Will provide better insight into rendering differences on the Wikipedias and most other wikis, where Tidy is almost always enabled.

Requires more QA on rendering accuracy (visual diffing).

Will provide better insight into (in)compatibilities with the current rendering and the scale of the work.

This is more of a laundry list of tasks, not all of which show up in the earlier sections. It can be fleshed out further and used to estimate how much time / resources we will spend on these tasks. It need not be part of the final roadmap, but should live somewhere so we have an overview of everything that needs to get done. It could even be folded into the previous section, if need be.

Content-model constraints: even if a transclusion's output is well-formed, you cannot transclude an A-link inside another A-link. Basically, the overriding concern is: what is required to simply "drop in" the DOM output of a transclusion into a DOM tree? One approach is to enforce constraints on what a template can produce in all possible expansions for all possible inputs ("static typing" and automatic type coercion). (See Parsoid/DOM_notes)
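One such check can be sketched concretely. This is a minimal, hypothetical example (function names are illustrative, not Parsoid's): before dropping a transclusion's DOM fragment into the tree, refuse the insertion if it would nest an a element inside another a element.

```python
# Sketch (assumed): one content-model constraint for drop-in insertion
# of a transclusion fragment - no <a> may end up nested inside an <a>.
from xml.etree import ElementTree as ET

def contains_link(fragment: ET.Element) -> bool:
    # True if the fragment is, or contains, an <a> element.
    return fragment.tag == "a" or fragment.find(".//a") is not None

def can_insert(fragment: ET.Element, inside_link: bool) -> bool:
    # inside_link: whether the insertion point has an <a> ancestor.
    return not (inside_link and contains_link(fragment))

frag = ET.fromstring('<span><a href="/wiki/Foo">Foo</a></span>')
assert can_insert(frag, inside_link=False)
assert not can_insert(frag, inside_link=True)
```

Enforcing this statically over all expansions of a template, rather than per insertion at runtime, is the "static typing" option mentioned above.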

LintTrap/WikiLint: GSoC project

Support for authorship maps

Requires stable element ids

Editing support:

Support for switching between HTML/wikitext in the editor. The naive approach is not too difficult to support, but will likely not be very performant. To be investigated.

Support for HTML editing of transclusion parameters (in progress).

Possibly support content widgets for common tasks (for which combinations of templates are currently used: infoboxes, football tables, discographies, etc.)

Selser testing is still pretty painful. As selser gets more refined, and as our accuracy improves in general, it is getting harder and harder to trust both "green" and "red" results from parser test runs. We may need to consider more controlled edit generation, where we can construct an oracle that gives us authoritative edited wikitext to compare selser against.
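The controlled-edit idea can be sketched as a tiny harness. Everything here is an assumption for illustration: the edit is applied directly to the wikitext to produce the oracle, and a stand-in for the selective serializer is compared against it.

```python
# Sketch (assumed): controlled edit generation with a wikitext oracle.
def oracle_edit(wikitext: str, old: str, new: str) -> str:
    # The authoritative result: perform the known edit on the wikitext
    # itself, so the expected output is unambiguous.
    return wikitext.replace(old, new)

def selser_stub(original_wikitext: str, old: str, new: str) -> str:
    # Stand-in for selser: a real run would apply the edit to the DOM
    # and serialize it, reusing original wikitext for unedited regions.
    return original_wikitext.replace(old, new)

wt = "Hello ''world''"
assert selser_stub(wt, "world", "there") == oracle_edit(wt, "world", "there")
```

Because the edit is generated rather than arbitrary, a mismatch is a genuine selser bug, not an artifact of an untrustworthy expected result.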