
User Agent: Mozilla/5.0 (X11; U; Linux i686; hi-IN; rv:1.9.2.24) Gecko/20111108 Fedora/3.6.24-1.fc14 Firefox/3.6.24
Build ID: 20111108102749
Steps to reproduce:
Hi,
We have been doing translation for Mozilla for several years. We find the Mozilla translation process unsuitable for the purposes of the translator and localizer communities, for the following reasons:
1. Time Consuming: The whole process is very cumbersome and time consuming. Tasks that could be done by machine, we have to do by comparing files in the source with our locale. This alone takes 40% of the time of volunteers who give their free time to Mozilla.
2. Translation Memory and reuse of database: Because the whole translation process depends mainly on committing files through hg and svn and editing them in a text editor, we are unable to use translation memory or reuse the translations we have done earlier. This again takes volunteers more time than is actually required.
3. Inconsistency in Translation: Because no translation memory is used, translation suffers from inconsistency and a lack of standardization, which leads to poor usability. We have conducted review meetings and received feedback about inconsistencies.
4. Glossary: Because no available translation tool can edit files in the DTD and properties formats, it is difficult for translators to use approved terminology in the translation of Mozilla products. Again, this creates usability problems.
Regards,
Rajesh and Aman

I agree that the tools for the actual localization work are not cool. You're raising some fair points that we should put on the list to get right for good tools.
My personal list is a good deal longer, though, http://blog.mozilla.org/l10n/2011/12/06/future-of-mozilla-l10n-tools/. I'm pretty sure that the existing toolchains out there don't deliver the experience that's worth changing for.
Some technical nits: The word "process" is used in a variety of meanings, many folks at least around me will misread that to be the product shipping process. Also, I'm not sure if this is a good topic for a bug, or if this product/component would be. First time I run across a bug here.

Just for the record, here's the current bunch of available tools: https://wiki.mozilla.org/L10n:Tools
From what I understand, there are a few areas where we can take advantage of machines for frustrating, repetitive work. If that's the case, then please note them all down concisely - maybe Community Tools will be able to help develop Firefox add-ons for most of the text parsing/processing purposes.
CCing Brian King here for the same.

(In reply to Axel Hecht [:Pike] from comment #2)
> Some technical nits: The word "process" is used in a variety of meanings,
> many folks at least around me will misread that to be the product shipping
> process. Also, I'm not sure if this is a good topic for a bug, or if this
> product/component would be. First time I run across a bug here.
Thanks Pike!
I understand that this is an issue of Mozilla Communities/Contributor Engagement, so I filed it here. If Mozilla thinks it should be under a different component and a different subject line, it can be changed. I want to track this issue here in Bugzilla and resolve this issue of community collaboration, because it has been due for years.

(In reply to Rajesh Ranjan from comment #0)
> 1. Time Consuming: The whole process is very cumbersome and time consuming.
> The task that can be done through some machine work, we have to do by
> comparing files in source with our locale. This task itself takes 40% time
> of a volunteers who give their free time for the Mozilla.
>
> 2. Translation Memory and reuse of database: In the whole translation
> process, mainly dependent on committing files through hg and svn and editing
> the files through an editor, we am unable to use the translation memory and
> reuse of the translation we have done earlier. This again takes a volunteers
> more time than what it can be actually required.
>
> 3. Inconsistency in Translation: Due to no use of memory, translation leads
> to the problem of inconsistency and issue of standardization. This problem
> leads to poor usability issues. We have conducted review meets and got
> feedback of inconsistencies.
>
> 4. Glossary: Due to unavailability of any translation tool that can edit the
> files in the dtd and properties format, it is difficult for translators to
> use approved terminology in the translation of Mozilla product. Again
> creates usuability.
>
I missed adding one important issue:
5. Proofread/Review: In the existing process it is very difficult for us to review our own translations. For review we have to open the source and target language files separately and then search for matching entities (and, to make the issue more complex, the entities can be at different positions in the files). So a one-time error can become a permanent error! Users of localized Mozilla products are therefore at great risk.
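The side-by-side comparison described above is exactly the kind of task a small script could take over. As a hypothetical sketch (the entity names are invented, and real DTD files have more syntax than this simple regex handles), one could pair up entities from a source and a target DTD and flag untranslated ones regardless of their position in the file:

```python
import re

# Matches simple entity declarations like: <!ENTITY file.label "File">
ENTITY_RE = re.compile(r'<!ENTITY\s+([\w.-]+)\s+"([^"]*)">')

def parse_dtd(text):
    """Return a dict of entity name -> value from DTD-style text."""
    return dict(ENTITY_RE.findall(text))

def side_by_side(source_text, target_text):
    """Pair each source entity with its translation (None if missing),
    keyed by entity name, so file position no longer matters."""
    source = parse_dtd(source_text)
    target = parse_dtd(target_text)
    return {name: (value, target.get(name)) for name, value in source.items()}

en = '<!ENTITY file.label "File">\n<!ENTITY edit.label "Edit">'
hi = '<!ENTITY edit.label "सम्पादन">'
pairs = side_by_side(en, hi)
print(pairs["file.label"])  # ('File', None) -> still untranslated
print(pairs["edit.label"])  # ('Edit', 'सम्पादन')
```

Keying on entity names rather than line positions is what makes review feasible even when the files diverge in layout.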

All these features are more or less present in existing translation tools (online or offline), one of them being the one I develop ( http://a.maimult.ro/lmo/ ).
The outstanding problem is that one size does not fit all, and I could list a dozen reasons why people don't use my tool or other tools. There is a big difference between new localization teams (which prefer ease, speed, and online tools) and old teams (which prefer control, quality, and mostly offline tools).
The tool needs to be part of the Mozilla build process, so if it's there already, it needs customizing.

I agree, one size fits all just doesn't seem to exist when it comes to l10n. I used to be on Narro but I've been on Locamotion (http://pootle.locamotion.org/gd) for a good while now, which seems to fit my personal needs (i.e. it gives me po files which I can use in a TM, and someone else does the committing). I like the latter because it reduces the need for me to mess around with lines of code which I barely understand (even 3 years down the line). But again, some folks don't find that scary and are more scared by translation.
Personally (and this is from a non-technical point of view), I think that if we could devise a way that allows the teams to pick up a format (it doesn't have to be po) that can be used in a variety of tools, drop it back off and have it automatically fed into the nightly builds etc., that would work for a lot of people.
I think that whatever we do, we should consider that our group of localizers is "skewed" and may not represent the views of people overall who would be willing to translate but not localize, if you get my meaning. At the moment, only people who are to some extent competent in all this repo and svn stuff (or who happen to know someone) ever join the l10n process. That means that we're excluding a lot of potential good translators who can translate beautifully but who get cold shivers when reading the technical notes for the process (I'm one of them and without the help of Kevin on the Irish team, there still wouldn't be a Gaelic version).
The point is, when we decide, we must try and get the views of people who are interested in translating (but who aren't at the moment) to find out what would encourage them enough to actually do something. Otherwise we end up with another process developed by techies (I mean that kindly) for techies, with surprised looks at the end that the process only attracts techies. There just aren't too many people who are good at translation AND code.

(In reply to Michael Bauer from comment #7)
> I agree, one size fits all just doesn't seem to exist when it comes to l10n.
"One size fits all" works wonderfully for MediaWiki: It has translations to over 300 languages, over 100 of which are nearly complete and up-to-date, and all the translators use the same tools.
> I personally (and this is from a non-technical point of view) if we could
> devise a way that allows the teams to pick up a format (doesn't have to be
> po) that can be used in a variety of tools, drop that back off and have that
> automatically fed into the nightly builds etc would work for a lot of people.
This is what Mozilla does now and honestly, it's a horrible mess.
It's hard to join a team; there's almost no communication between different language teams; people who use different tools overwrite each other's work time after time; etc.
> The point is, when we decide, we must try and get the views of people who
> are interested in translating (but who aren't at the moment) to find out
> what would encourage them enough to actually do something. Otherwise we end
> up with another process developed by techies (I mean that kindly) for
> techies, with surprised looks at the end that the process only attracts
> techies. There just aren't too many people who are good at translation AND
> code.
The thing that attracts people to translating is seeing their work accepted and used by somebody. If a string that I translated ends up being used in the released product in a reasonable amount of time, then I'm a happy translator. In any other case, I'm not a happy translator.
Any program or practice that doesn't have this in its design goals is flawed.

(In reply to Rajesh Ranjan from comment #0)
> User Agent: Mozilla/5.0 (X11; U; Linux i686; hi-IN; rv:1.9.2.24)
> Gecko/20111108 Fedora/3.6.24-1.fc14 Firefox/3.6.24
> Build ID: 20111108102749
>
> Steps to reproduce:
>
> Hi,
>
> We do translation for Mozilla from several years. We find Mozilla
> translation process not suitable for the purpose of translators and
> localizors communities. There are following reasons for the same:
>
> 1. Time Consuming: The whole process is very cumbersome and time consuming.
> The task that can be done through some machine work, we have to do by
> comparing files in source with our locale. This task itself takes 40% time
> of a volunteers who give their free time for the Mozilla.
>
Rajesh, you are wrong.
It takes 20% of the time for translation and 80% of the time for the rest of the process.
And in the case of a translation bug, it is horrible.
It is high time we found a way out. Otherwise it will be very difficult to attract new contributors into localization.

(In reply to Amir Aharoni from comment #8)
> "One size fits all" works wonderfully for MediaWiki: It has translations to
> over 300 languages, over 100 of which are nearly complete and up-to-date,
> and all the translators use the same tools.
You can't compare them. MediaWiki is surely tiny in terms of localized strings compared to Firefox.

The translation process for Fedora, Transifex, and GNOME is very simple. I have translated 50k strings just by downloading the files from these projects and uploading them very easily after translation. Recently I started translating Firefox into Magahi
and I am facing lots of problems during translation, like downloading the translations and translating them using editors. It consumes lots of time. When I went through the uploading process I found it very hard because of the hg and svn steps. I think Mozilla should take steps to create an interface from which a translator with no knowledge of commands can easily upload through some GUI and review the work there.
We volunteers are spending our free time translating. If this were possible, it would save our time and help development.

(In reply to Robert Kaiser (:kairo@mozilla.com) from comment #10)
> (In reply to Amir Aharoni from comment #8)
> > "One size fits all" works wonderfully for MediaWiki: It has translations to
> > over 300 languages, over 100 of which are nearly complete and up-to-date,
> > and all the translators use the same tools.
>
> You can't compare them. MediaWiki is surely tiny in terms of localized
> strings compared to Firefox.
Perhaps a comparison with mammoths like GNOME, KDE, or LibreOffice would be more convincing.
Honestly, compared to most other projects that people like me or Rajesh have worked on over the last 7-8 years, Firefox is probably the tiniest and yet the most painful in terms of process. No other project expects translators to keep watching over their shoulders in case someone starts a parallel translation in another tool, duplicating their work or bypassing the language team completely. Besides the usual complications of manually copying strings in plain text editors (in most cases) or figuring out a tool to translate the strings, there are the innumerable HTML pages which do not seem to have any specific schedule besides 'asap'. To date I have not seen a listing of the various components that constitute a 'complete localized $Mozilla product', or their individual completion roadmaps and schedules. For instance, in Fedora we have specific schedules for the main product, web components, documents, and everything else that lands with the L10n team. Please correct me if I am wrong, but I do not think there is a localization schedule for the current in-development Firefox 13 either. So at any point in time, the translators are unaware of the components they are expected to deal with when they sign up for this.

One additional requirement that needs to be added here is that the process needs to be secure and have various levels of peer review. That's one big difference between a MediaWiki kind of process and anything we need for Firefox. With MediaWiki, some pages get installed on some websites, and updates to those pages are relatively easy and can be accomplished quickly. With Firefox, client software gets installed on 400 million systems around the world, and prevention of and recovery from any mistakes in that process plays a larger role.
There were a few more things we talked about as differences between MediaWiki and Firefox at the session on this at MozCamp Berlin. Does anyone have additional notes from that session that could feed into these requirements?
Thanks for filing the bug and getting some discussion going, but at some point the requirements for this probably should be moved to a wiki so we can manage and follow the list of characteristics for new tools a bit more easily.

(In reply to Amir Aharoni from comment #8)
> > I personally (and this is from a non-technical point of view) if we could
> > devise a way that allows the teams to pick up a format (doesn't have to be
> > po) that can be used in a variety of tools, drop that back off and have that
> > automatically fed into the nightly builds etc would work for a lot of people.
>
> This is what Mozilla does now and honestly, it's a horrible mess.
What I meant is, a format that's less messy than dtd files. I just wanted to steer clear of suggesting po files; I don't have a fixation on them :)
MediaWiki is a really bad example in some ways. OK, within itself it works, but there's no way I can reuse my (rather large) TM, which includes stuff I've done for other projects. Now, if I could export from MediaWiki and then import again, that would be nice.
Peer review is useful but, for a lot of smaller languages, tricky. Three years down the line, the amount of feedback I get is still close to zero, and any errors I find, I find through daily use. So by all means enable peer review, but don't make it obligatory or many small locales will never get to release.

(In reply to chris hofmann from comment #13)
> one additional requirement that needs to be added here is that the process
> needs to be secure and have variously levels of peer review. That's one big
> difference between a mediawiki kind of process and anything we need for
> Firefox. with mediawiki some pages get installed on some websites, and
> updates to those pages are relatively easy and can be accomplished quickly.
> With Firefox client software gets installed on 400 million systems around
> the world and prevention and recovery from any mistakes in that process
> plays a larger role.
>
> There were a few more things that we talked about as differences between
> mediawiki and Firefox at the session on this at mozcamp berlin. does anyone
> have additional notes on that session that could feed into these
> requirements?
>
> thanks for filing the bug and getting some discussion going, but at some
> point the requirements for this probably should be moved to a wiki so we can
> manage and follow the list of characteristics for new tools a bit easier.
Here is a wiki where you all can find more info about our current plans for new L10n tools:
https://wiki.mozilla.org/L10n:Tools_Vision
In addition, I recommend that we move this discussion to the newsgroups, both dev.tools.l10n and dev.l10n.

re: comment 12.
There are a lot of requirements wrapped up there. Let's try to pick them apart and list them.
* The new system needs a comprehensive list of localization tasks.
The new dashboard Axel's been working on makes some strides here. More about that soon.
* The new system needs schedule information.
We do have a more reliable string freeze and product schedule due to rapid release. There are probably some easy things we can do to pull in schedule information from https://wiki.mozilla.org/Releases, which has upcoming dates for repository migrations that correspond to string freeze and localization periods on the aurora channel, where most teams work. Some teams work on mozilla-central, and it would be useful to add that information to the team dashboards.
* Schedules need to be more predictable and reliable to allow better planning.
Not sure this is really a requirement for the system, but we can say that we are pushing for a more comprehensive schedule out of the marketing team for promotional translation material.
One question also: can you link to the dashboards or pages that give this comprehensive schedule view for GNOME, KDE, or LibreOffice? Maybe there are some good examples we can lift from them.

Hi,
I recently started contributing to Mozilla Marathi (mr), and frankly I found starting to contribute really, really hard. I have faced most of the issues discussed here in my relatively short time as a contributor.
As of now, the onus is on the contributor to make sure he sticks to the established standards, and even doing this is painful. I had to go through tens of other files to find the accepted translation.
The closest thing I found to helping me was http://frenchmozilla.com/transvision/
However, it still doesn't solve a lot of the standardization issues. In fact, Pike and I had discussed these issues on IRC and had come up with the idea of a tool doing something of the sort discussed here. This is a link to the proposal that I submitted to Google Summer of Code for the tool in Mozilla that we had discussed along with Phillipe: http://goo.gl/546JU
Though the above doesn't solve all the problems raised here, it is at least a step toward solving them.

I added these items to https://wiki.mozilla.org/L10n:Tools_Vision
* Automation for every reasonable task where automation can be provided
* Quick reference to and reuse of previously localized translation material wherever it is related to the current work
* Links to the translation "style guide" where the localization team has established rules upon which the localization effort is built
* A clear and comprehensive view of all the translation work related to a Mozilla project, including product, web promotional material, and anything else directly related to the project
* A clear view of proposed schedules, milestones, and progress toward the completion of translation work, testing, and acceptance of a localization project or projects
* Options for accomplishing the translation work both online and collaboratively, and offline where bandwidth and other factors may limit effective contribution
I think that captures everything concrete said in this bug so far. Let me know if you think there are other requirements that can be abstracted, and let's try to keep this bug away from saying things like the work is "hard, tricky, messy, ugly, etc." and focus on specific things that we can add to or remove from the process, or automation that can be part of the tools.

I understand Mozilla's concern about security, and I also feel pride in seeing its popularity, but I feel that the requirements and concerns we have put here would raise the experience of the localized product to a much higher level than it is today. We have been part of the Mozilla community for 7-8 years; we have sat through several presentations and sessions on l10n over the years, and we have already raised our voices and concerns in those sessions too. But the process remained the same. So we have simply made the request here again.
It is really surprising that we cannot take words like "hard, tricky, messy, ugly, etc." as feedback, simple and 'innocent' feedback. We are committed communities, and we have proved it over the years through our contributions. When Mozilla asks us to translate "Fast, flexible, secure", I translate it, and believe me, I feel as good as you do when shouting these words in my language.
What you have put at https://wiki.mozilla.org/L10n:Tools_Vision are the very first things any global product aiming to be localized into multiple languages would do, and if these are implemented a lot of problems will certainly be solved. Thanks for the same.

> It is really surprising why cannot we take the words like "hard, tricky, messy,
> ugly, etc...." as a feedback, a simple and 'innocent' feedback.
What I meant to say was that it is fine to characterize the current tools in this way, but it's not really helpful in designing the new systems just to leave it at that kind of high-level description. We need to articulate what it is about the system that makes things hard, tricky, messy, and so on, and add more detail if we are going to try to make improvements.

We also don't have to limit this bug to a major effort to redesign new tools. If anyone spots easy things that can be done to improve existing tools, add a comment or spin off dependent bugs.
We do have people working on that as well. For example:
* Arky and Alexandru have been working on improving some performance problems in Narro.
* Dwayne is spinning up an effort to do some bug fixing and improvements on Verbatim.
* And as mentioned, Axel's been working on improvements to the dashboard and getting new team pages hosted on https://l10n.mozilla.org/

Transifex's (www.transifex.net) interface can serve as an example of a simple and easy translation interface. It has options for both offline and online translation. Mozilla has some projects over there:
https://www.transifex.net/projects/p/mozilla/

(In reply to chris hofmann from comment #13)
> one additional requirement that needs to be added here is that the process
> needs to be secure and have variously levels of peer review. That's one big
> difference between a mediawiki kind of process and anything we need for
> Firefox. with mediawiki some pages get installed on some websites, and
> updates to those pages are relatively easy and can be accomplished quickly.
> With Firefox client software gets installed on 400 million systems around
> the world and prevention and recovery from any mistakes in that process
> plays a larger role.
The number of users is comparable. Wikipedias in all languages get about half a billion users a month.
This leaves two problems:
* Quality control: In the MediaWiki translation process it's possible to hold back translated strings from deployment until they are reviewed. It's optional and rarely enacted in MediaWiki, because the infrastructure for updates is very robust and any problems can be fixed easily. But it nevertheless exists.
* Updating strings when a mistake is found between releases: these can be pushed to Firefox like security updates, if really needed. Something like this is done in Ubuntu. In Firefox the scope should be much smaller.
> There were a few more things that we talked about as differences between
> mediawiki and Firefox at the session on this at mozcamp berlin. does anyone
> have additional notes on that session that could feed into these
> requirements?
As far as I remember these were the main points. I believe that much of the problem here is social, as Alexandru says: "There is a big difference in new localization teams (that prefer ease, speed and online tools) and old teams (that prefer control, quality and mostly offline tools)."
The scariest problem with the old teams is that it's very hard for newcomers to join them. It also makes it hard to start new teams, because new teams want new tools and the old teams can't help them, because they use different tools. That's one big reason why everybody should use new tools and be less afraid of change.

(In reply to Robert Kaiser (:kairo@mozilla.com) from comment #10)
> (In reply to Amir Aharoni from comment #8)
> > "One size fits all" works wonderfully for MediaWiki: It has translations to
> > over 300 languages, over 100 of which are nearly complete and up-to-date,
> > and all the translators use the same tools.
>
> You can't compare them. MediaWiki is surely tiny in terms of localized
> strings compared to Firefox.
Did you count?
MediaWiki + the extensions that are needed to make Wikipedia work has roughly 10,000 strings. (I'm not counting other MediaWiki extensions, which are not used in Wikipedia and various related products, such as offline Wikipedia readers.)
How many does Firefox have?

(In reply to chris hofmann from comment #18)
> I added these items to https://wiki.mozilla.org/L10n:Tools_Vision
>
> * Automation for every reasonable task where automation can be provided
Can we first get auto-merging of strings from English into the locale? It would reduce a major workload. Currently we:
1) check for mismatched strings in the browser (https://l10n.mozilla.org/dashboard/compare?run=215111)
2) find them in the English file and copy
3) paste them into the language file
If the dashboard is able to find mismatched strings, then it shouldn't be hard to replace them.
This feature could be added to the dashboard to give each team the option to auto-merge always, sometimes, or never (it could be a button like Sign-off).
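As a sketch of the auto-merge idea (this is not part of any existing Mozilla tool; the function name, keys, and the assumption that .properties files have already been parsed into dicts are all invented for illustration), the copy/paste steps above boil down to a simple merge:

```python
def merge_properties(en_us, locale):
    """Fill gaps in a locale with en-US strings and drop obsolete keys.

    Both arguments are dicts of key -> string, as parsed from .properties
    files. Returns the merged dict plus the list of keys that fell back to
    English, so the team still knows what remains to translate.
    """
    merged = dict(locale)
    fell_back = [k for k in en_us if k not in locale]
    for k in fell_back:
        merged[k] = en_us[k]          # step 2+3: copy the English string over
    for k in list(merged):
        if k not in en_us:
            del merged[k]             # key removed upstream: drop it
    return merged, fell_back

en = {"tab.new": "New Tab", "tab.close": "Close Tab"}
pa = {"tab.new": "ਨਵੀਂ ਟੈਬ", "old.key": "ਪੁਰਾਣਾ"}
merged, todo = merge_properties(en, pa)
print(todo)  # ['tab.close'] still needs translation
```

The `fell_back` list is what would feed an "always / sometimes / never" dashboard option: a team could review it before signing off instead of merging blindly.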

(In reply to Francesco Lodolo [:flod] from comment #27)
> (In reply to A S Alam from comment #26)
> > can we first get auto-merge strings from English to locale? It will reduce
> > major workload.
>
> It's already there (translation missing->fall back to English)
I think the requirement is more like this: rather than just listing the changes (indicating which strings to add/remove/update in the localized files, which the translator then has to change manually), there could be a way to auto-merge into the localized files, much the way msgmerge works for PO files, or the way KDE merges an updated POT with an existing PO to generate a merged PO.
The steps would be:
1) Identify modified files
2) Merge changes into the target l10n files
3) If an existing entity is modified or deleted, mark it in some way (like fuzzy in PO files), so the translator can review and change it accordingly.
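A minimal sketch of step 3, assuming the old source, new source, and translations have already been parsed into dicts (the function name and entity names are hypothetical, and this is a simplification of what msgmerge actually does):

```python
def merge_with_fuzzy(old_source, new_source, translations):
    """msgmerge-style merge: entries whose source string changed keep their
    old translation but are flagged fuzzy for the translator to review.

    All arguments are dicts of entity name -> string.
    Returns a dict of name -> (translation_or_None, is_fuzzy).
    """
    result = {}
    for name, new_text in new_source.items():
        translated = translations.get(name)
        changed = name in old_source and old_source[name] != new_text
        if translated is None:
            result[name] = (None, False)        # brand new, untranslated
        elif changed:
            result[name] = (translated, True)   # source changed: review needed
        else:
            result[name] = (translated, False)  # still up to date
    return result

old = {"quit.label": "Quit"}
new = {"quit.label": "Quit Browser", "help.label": "Help"}
tr = {"quit.label": "Beenden"}
merged = merge_with_fuzzy(old, new, tr)
print(merged["quit.label"])  # ('Beenden', True): fuzzy, needs review
```

Entities deleted upstream simply drop out because the loop iterates over the new source, which matches how merged POT/PO catalogs retire obsolete entries.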

(In reply to Guntupalli Karunakar from comment #28)
> I think the requirement is more like.. rather than just list the changes,
> indicating what strings to add/remove/update in localized files which
> translator has to manually chane, there could be way to auto-merge into
> localized files. Much the way msgmerge works for PO files, or how KDE merges
> updated POT with existing PO to generate merged PO.
Sorry, but I'm not following you.
a) compare-locales on the dashboard tells you which strings are missing and need work
b) the build system creates a working application with English strings in place of missing strings, so that you can test your work even if it's not finished
Why would you need something that adds en-US strings to your locale? It would just be messy, and I personally don't like external tools messing with my repository.
I "translate" Firefox as Firefox; with that approach I would never know whether I had already translated a string, because the build system added it to my locale. Again, messy.
You just need a tool able to read the en-US strings, translate them, and export them to your repository (the behaviour in case of missing strings is up to the tool: either exporting the original string or leaving it blank). The current situation is not optimal, but there is plenty of tooling with these features that works on Mozilla software right now (I'm currently using Mozilla Translator).

Most projects, such as GNOME, Fedora, Debian, KDE, LibreOffice, Sugar, and XFCE, follow the same approach: they just merge in new strings and mark changes as fuzzy. Only KDE has a minimum translation threshold for essential packages [1]. Mozilla could implement the same for the main GUI to maintain its quality.
[1] http://i18n.kde.org/stats/gui/trunk-kde4/essential/

That's because they're based on PO files, so they work in terms of merge and fuzzy.
There are tools to convert Mozilla files to PO if you like that approach
http://translate.sourceforge.net/wiki/toolkit/moz2po
(In reply to Amir Aharoni from comment #24)
> The scariest problem with the old teams is that it's very hard for newcomers
> to join them. It also makes it hard to start new teams, because new teams
> want new tools and the old teams can't help them, because they use different
> tools. That's one big reason why everybody should use new tools and be less
> afraid of change.
Please don't arbitrarily decide what other people should do in their free time; we're all volunteers ;-)
It's probably just me, but I have difficulty understanding what this bug is about after all these comments: should we discuss what a "Universal Mozilla l10n Tool" should be like? Is a bug the best place for this kind of discussion?

(In reply to Francesco Lodolo [:flod] from comment #31)
> It's probably just me, but I have difficulties understanding what this bug
> is about after all these comments: should we discuss what a "Universal
> Mozilla l10n Tool" should be like? Is a bug the best place for this kind of
> discussion?
Of course this is not a good place. We should move to dev.l10n and keep this bug clean for development (or close it as invalid).

(my email):
My points:
1) I'm using MT (as are Ricardo from my team, es-ES, and Flod from it, AFAIK), and I'm pretty happy with the development and the ease of use, and I do not have any plans to move to another tool in the near future. But IF THE MASTER TOOL is made and it's a massive win, maybe I'll switch.
2) Yes, I think we should change some of the ways we translate:
2.1) We need more communication (I have not complained about mobile before, but I second every single comment from Flod in the past mails)
2.2) We need better sign-off pages (this is mostly working, thanks l10n team!)

(In reply to Francesco Lodolo [:flod] from comment #31)
> That's because they're based on PO files, so they work in terms of merge and
> fuzzy.
>
> There are tools to convert Mozilla files to PO if you like that approach
> http://translate.sourceforge.net/wiki/toolkit/moz2po
JFYI, LibreOffice has solved this problem by using gettext files for the translation process and converting those .po files to native ones during compilation. This has not only made fuzzy strings possible in translation, but also vastly expanded the pool of tools the localizer can choose from.
WRT .dtd and .properties: I believe these will be gone sooner or later, as L20n is underway. What I'm more concerned about, though, is that from following (well, skimming through) the L20n mailing list, I got the impression that this format will be even techier and harder to deal with than the current ones. I know it will be more powerful than ever, but at least to me it looks rather scary and complicated. I'm still quite positive that I can get used to it if needed, but I'm a techie, unlike the localizers complaining about Mercurial or SVN in this bug.
I remember the main argument against gettext a few years back was that strings in .po files lacked context information, so hacks like msgid "context|string" were needed. That hasn't been the case since msgctxt was introduced four years or so ago, which makes me wonder whether gettext is still such a bad alternative to in-house development.
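As background for the msgctxt point above: a PO file can attach a context to otherwise identical source strings so tools keep the translations apart. A minimal, hypothetical fragment (the file references and context names are invented for illustration):

```po
#: browser/menu.dtd:12
msgctxt "menu item"
msgid "Open"
msgstr "खोलें"

#: browser/state.properties:4
msgctxt "connection state"
msgid "Open"
msgstr "खुला"
```

The two entries share an English msgid but are distinct catalog entries, replacing the old "context|string" workaround embedded in the msgid itself.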

gettext, and more importantly, all translation tools that work on top of it, don't distinguish between a missing localization and a properly done localization which happens to be the same character sequence as the original English string. The '&' accesskey story is also a bad decision, looking at how many problems it's causing when converted to our formats today.
l20n won't have those deficiencies, and will be capable of solving a good deal of the real-world problems we have today (see the AccessFoo.properties thread in .l10n, to name one).
gettext has a tradition of just adding stuff on top of what's there, with more magic comments etc. I'm very well aware of that. The real blocker is not so much gettext and PO, but that adding stuff on top doesn't get the tools people use to support that stuff on top. And moving that ecosystem will be much harder than moving our own.

(In reply to Axel Hecht [:Pike] from comment #36)
> gettext, and more importantly, all translation tools that work on top of
> it, don't distinguish between a missing localization and a properly done
> localization which happens to be the same character sequence as the original
> English string.
Either I'm not following you here, or you're plain wrong. Gettext *does* distinguish between a string that is marked as fuzzy and one that is not. If it just happens to be the same character sequence as the original English string, but it's not fuzzy, then AFAIK it is treated as the correct l10n.
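A minimal sketch of the distinction being argued here (the entries are invented examples):

```
# Untranslated: empty msgstr
msgid "Cancel"
msgstr ""

# Deliberately identical to the English source, and not fuzzy:
# tools treat this as a finished translation
msgid "OK"
msgstr "OK"

# Same character sequence but flagged fuzzy: needs review
#, fuzzy
msgid "Menu"
msgstr "Menu"
```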
> The '&' accesskey story is also a bad decision, looking at
> how much problems that's causing when converted to our formats today.
Not sure what you mean by this either. IIRC, the accesskey character can be specified in an optional header in the .po file, but it's not mandatory, and I don't think gettext itself does anything magic with that character. It's the tools that may or may not treat that character specially.
I don't think it would even be problematic to specify the accesskey separately, as we do in .properties. Although I personally don't think specifying it inline is inferior. Both options have pros and cons.
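For comparison, a sketch of the two conventions (the strings are invented): Mozilla's .dtd keeps the accesskey as a separate localizable unit, while the '&' convention marks it inline in the translatable string.

```
<!-- Mozilla .dtd: label and accesskey are separate entities -->
<!ENTITY fileMenu.label "File">
<!ENTITY fileMenu.accesskey "F">
```

versus the inline convention sometimes used with PO-based workflows:

```
# '&' marks the accesskey inside the translatable string itself
msgid "&File"
msgstr "&Archivo"
```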
> l20n won't have those deficiencies,
...which I'm not convinced of... :)
> and will be capable of solving a good deal
> of real world problems we have today (see AccessFoo.properties thread in
> .l10n, to name one).
Like I said in the thread, I don't see that issue as a really big problem. An overhead of 3*65 trivial strings isn't that unmanageable compared to the 4+65 base strings that we get to have anyway (of course, 65*65 is worth thinking about, but how common is that case?).
> gettext has a tradition to just add stuff on top of what's there with more
> magic comments etc. I'm very well aware of that. The real blocker is not so
> much gettext and PO, but that adding stuff on top doesn't get the tools
> people use to support that stuff on top. And moving that ecosystem will be
> much harder than moving our own.
The people who use those tools get them to support new features. I'm pretty sure, without even checking, that by now all of the popular and maintained .po editors are aware of msgctxt and can present it as a hint to the user. How wrong am I?
And by "our own" ecosystem, I assume you mean something we can control. You're right on that one, but can anyone stop Mozilla from creating, maintaining, and moving forward its own gettext editor? No. We can still have "our own" either way. The only difference is that with gettext, the localizer could at least potentially benefit from a much broader choice of tools (some of which admittedly might not support some of the features we would use), while with L20n, that broader choice is simply nonexistent.

Another important thing to note here is that several *Mo projects are growing under Mozilla, and they are starting their own localization efforts without taking into consideration the Mozilla l10n workflow currently in use. There is a great need to sync all these efforts and conversations in one place.

Given the prevalence of Locamotion these days, which offers the usual tools associated with PO files in terms of translation memory etc., shouldn't this be marked FIXED?
The original four points were "Time Consuming", "Translation Memory", "Inconsistency in Translation: Due to no use of memory", and "Glossary". I would say all of those are fixed if the locale moves to Locamotion.

(In reply to Axel Hecht [:Pike] from comment #41)
> I don't have a good suggestion where this bug should go, but it has nothing
> to do with marketing.
I agree with Pike, the bug has nothing to do with marketing.
