(copying in some other useful background from email)
Some open questions:
* Is it worth compressing the JSON on disk, via a very fast compression algorithm like Snappy? (See below)
* Can we make the data store more resilient/fault tolerant? Do we need to?
* How dependent is this work on bug 832664?
* This would make 3rd party tampering with the on-disk data *much* easier. What can we do to counter that?
* Does this mean we can avoid polluting the codebase with Promises?
Regarding extensions.ini:
What Dave said - however, that amount of data does worry me a bit. The size of the SQLite databases isn't huge (most of the size on disk is overhead from SQLite), but the actual data is still at least 2 orders of magnitude larger than extensions.ini. I did a little experiment, dumping the contents of extensions.sqlite and addons.sqlite from my main profile, and compared the data size:
extensions.ini - 1.2KB
extensions.sqlite in JSON - 138KB
addons.sqlite in JSON - 79KB
Total of both DBs in JSON - 217KB
Loading and parsing 217KB of JSON is going to be a lot slower than a 1.2KB INI file. And we'll inevitably want to add more data to that.
It would also mean that, for the edge-case of a very large number of installed extensions, we'll end up loading a multi-megabyte JSON file on startup. Now, we shouldn't optimise for that case... but creating such a pathological startup situation is worrisome.
Getting rid of extensions.ini would also make it far more likely that we always take an I/O hit on startup. At the moment, if no extensions are enabled we only need a single fstat to check that extensions.ini doesn't exist. Even with no profile-installed extensions, globally-installed extensions are not uncommon - so we'd end up reading and parsing the data on startup more often for people who don't use add-ons.
That said, it would be nice to achieve - assuming it doesn't end up regressing various situations just for the sake of code niceness.
As a bonus, I think it would make it easier to solve the issue of writing data on shutdown (bug 731489).
- Blair

Database schema notes:
The RDF data structure in the addon's install.rdf is copied almost exactly into the XPIProvider.jsm/AddonInternal data structure, in XPIProvider.jsm/loadManifestFromRDF. XPIProvider adds a few fields for internal use.
The fields that need to be preserved in schema version changes are listed in XPIProvider.DB_MIGRATE_METADATA
The XPIDatabase routines implemented in XPIProviderUtils.js use XPIProvider.jsm/DBAddonInternal to hold almost exactly the same data as AddonInternal, restored from the database.
Both of these object types are then wrapped in XPIProvider.jsm/AddonWrapper to provide an external API
As a first cut, it looks like saving a set of (possibly slightly redacted) AddonInternal along with some metadata into a JSON file should work.
Here's a breakdown of the info in AddonInternal. Star in the RDF column means it's read directly from the addon's install.rdf or chrome.manifest; the MIG column means it's important state that gets migrated on version updates.
RDF MIG Database:
* id
* syncGUID // GUID assigned on first install
location // calculated from install path; e.g. app-profile, app-global
* version
* type
* internalName
* updateURL
* updateKey
* optionsURL
* optionsType
* aboutURL
* iconURL
* icon64URL
* defaultLocale
visible // calculated based on priority of install locations
active // calculated based on visible, userDisabled, softDisabled, appDisabled, pendingInstall
* userDisabled
appDisabled // calculated on version compatibility, blocklist
pendingUninstall // set by user, perhaps should be migrated for case of uninstall & DB upgrade in same restart
descriptor // full path of install location
* installDate
updateDate // perhaps should be migrated
* applyBackgroundUpdates
* bootstrap
* skinnable
size // calculated from file system NOTE: does a directory scan in XPIProvider.loadManifestFromDir()
* sourceURI
* releaseNotesURI
* softDisabled
* isForeignInstall
* hasBinaryComponents // parsed from chrome.manifest in addon
* strictCompatibility // default value from COMPATIBLE_BY_DEFAULT_TYPES
Each addon has a default locale (with the relevant fields stored in the top level of the install.rdf structure) and a possibly empty set of localizations.
* locale+ // e.g. ["en-US", "en-CA"]
* name
* description
* creator
* homepageURL
* developer+
* translator+
* contributor+
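As a sketch of what this might look like on disk - field names are taken from the breakdown above, but the exact shape, the schemaVersion field, and all example values are assumptions at this point:

```javascript
// Hypothetical layout for the proposed extensions.json; everything here
// is illustrative, not a committed schema.
const db = {
  schemaVersion: 1, // bumped when the JSON layout changes
  addons: [
    {
      id: "addon1@tests.mozilla.org",
      syncGUID: "{aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee}",
      location: "app-profile",
      version: "1.0",
      type: "extension",
      userDisabled: false,
      appDisabled: false,
      installDate: 1365000000000,
      defaultLocale: { name: "Test Add-on", description: "An example entry" },
      locales: [{ locale: ["en-US", "en-CA"], name: "Test Add-on" }],
      targetApplications: [
        { id: "xpcshell@tests.mozilla.org", minVersion: "1", maxVersion: "2" }
      ]
    }
  ]
};

// Round-trip through JSON, as the provider would on save/load.
const restored = JSON.parse(JSON.stringify(db));
console.log(restored.schemaVersion, restored.addons[0].id);
```

Loading would then be a single JSON.parse of the file contents, with AddonInternal instances reconstructed from the plain objects.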

Created attachment 734926
WIP - make some tests fail fast instead of hanging on JS exceptions
Some of the existing tests get JS exceptions in callbacks when the XPI database fails to find the requested addon; this causes the test to hang. Add checks for some of these cases so the tests fail fast when the XPI database is broken.

Just to make sure it's 100% clear, since RDF was mentioned in comment 2: This needs to not touch any code related to RDF parsing (it needs to stay independent from the DB, for when we start supporting a new JSON-based manifest format).

Created attachment 735153
WIP - save and restore JSON version of the XPI addon data
Rough version of saving and restoring XPI data in JSON format, so far just synchronous and tested manually from the JS scratchpad.

(In reply to Blair McBride [:Unfocused] (Back from the dead. Mostly.) from comment #4)
> Just to make sure its 100% clear, since RDF was mentioned in comment 2: This
> needs to not touch any code related to RDF parsing (it needs to stay
> independent from the DB, for when we start supporting a new JSON-based
> manifest format).
Absolutely. The only RDF I was thinking of removing was the old DB upgrade code, for handling extensions.rdf => extensions.sqlite. It turns out that I probably won't substantially redesign the DB upgrade, so it will likely take no extra effort to preserve the old DB upgrade; we can make the choice either way about whether to keep it around.

Created attachment 739185
TESTS WIP - Make some tests more async to prevent JS stack overflows
This patch contains a bunch of debug junk and some valuable changes. Nothing in this depends on the main patch; at some point I'll clean it up and we can land it early if it will help anyone else.

In the course of cleaning up test failures in the new JSON version of the code, I've come across one place where the test is sensitive to a change where the new code is synchronous and the old code was (usually) async. I've figured out where the failure is, but I'm not sure whether the problem is that the test is too sensitive, or that the original code is doing things in the wrong order.
The test in question is toolkit/mozapps/extensions/test/xpcshell/test_bug675371.js.
In short, the test adds a couple of addons, waits for several addon-related events to happen, ending with InstallListener.onInstallEnded, and then checks for several things, including that files in the addon's XPI are correctly registered with the component manager.
The problem is that the onInstallEnded notification is sent before the addon's bootstrap "startup" method is called, around http://dxr.mozilla.org/mozilla-central/toolkit/mozapps/extensions/XPIProvider.jsm#l5144, and the addon's manifest is not registered with the component manager until we call XPI_callBootstrapMethod specifically for the "startup" method at http://dxr.mozilla.org/mozilla-central/toolkit/mozapps/extensions/XPIProvider.jsm#l3773
The test breaks because after it gets onInstallEnded, it calls a sometimes-async API (AddonManager.getAddonByID) that happens to always behave async in the test case; that gives the bootstrap startup a chance to run before the test is performed. In the new implementation that call is currently always synchronous, so the test fails.
I can work around the problem by going around the event loop in the test, but that feels dirty; is there a better place to register the XPI with the component manager so that it doesn't race with the onInstallEnded notification?
See also http://dxr.mozilla.org/mozilla-central/addon-sdk/source/lib/sdk/addon/installer.js#l50, which fixes bug 749745 by doing a setTimeout(0) to let the startup method run before other code that expects the addon to be completely ready.
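To make the ordering concrete, here's a toy model of the race - all names are invented, and a hand-rolled task queue stands in for the event loop so the setTimeout(0) trick is visible:

```javascript
// Manual task queue standing in for the event loop.
const pending = [];
const defer = fn => pending.push(fn); // stands in for setTimeout(fn, 0)
const drainEventLoop = () => { while (pending.length) pending.shift()(); };

let started = false;
const observed = [];

function onInstallEnded() {
  observed.push(started);              // checked immediately: startup hasn't run
  defer(() => observed.push(started)); // checked a tick later: startup has run
}

function finishInstall() {
  onInstallEnded();  // the notification is sent first...
  started = true;    // ...then the bootstrap "startup" method is called
}

finishInstall();
drainEventLoop();
console.log(observed); // → [ false, true ]
```

A listener that needs the add-on fully started has to defer its work, which is exactly the dirty-feeling workaround described above.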

Created attachment 740282
WIP v3 - add more event loop spinning in tests to work around stack overflows
Most of the tests run through to completion now, though not 100% reliably. I'm going to try working with an optimized build to see if that helps.
The root of the problem is that the async APIs in AddonManager use a helper object that implements iteration over asynchronous operations by tail recursion; particularly in the case of AMI_getAddonsByIDs() it iterates over a list of addon IDs, calling AMI_getAddonByID() which then recursively iterates over all the addon providers, leading to O(n * m) stack inflation, because the new XPI provider never needs to async wait for the database.
It's still not clear whether there's an actual bug in JS making this problem so annoying; I've had a couple of conversations in #jsapi with followup in bug 762618, but no real progress.
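A stripped-down model of the iteration (invented names, fake providers) shows how the stack grows when every callback fires synchronously:

```javascript
// Sketch of the O(n * m) stack inflation: getAddonsByIDs "tail-recurses"
// over ids via a callback, and each lookup recurses over all providers.
// With synchronous callbacks no frame ever unwinds until the very end.
let depth = 0, maxDepth = 0;

function getAddonByID(providers, i, callback) {
  depth++; maxDepth = Math.max(maxDepth, depth);
  if (i >= providers.length)
    callback(null); // no provider had it
  else
    providers[i](found => found ? callback(found)
                                : getAddonByID(providers, i + 1, callback));
  depth--;
}

function getAddonsByIDs(ids, providers, results, callback) {
  depth++; maxDepth = Math.max(maxDepth, depth);
  if (ids.length === 0)
    callback(results);
  else
    getAddonByID(providers, 0, addon => {
      results.push(addon);
      // "tail" recursion: with synchronous callbacks this frame never unwinds
      getAddonsByIDs(ids.slice(1), providers, results, callback);
    });
  depth--;
}

// Synchronous providers that never find the add-on: the worst case.
const providers = [cb => cb(null), cb => cb(null), cb => cb(null)];
getAddonsByIDs(["a", "b", "c", "d"], providers, [], () => {});
console.log(maxDepth); // grows as O(ids * providers) rather than staying flat
```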

(In reply to :Irving Reid from comment #11)
> Most of the tests run through to completion now, though not 100% reliably.
> The root of the problem is that the async APIs in AddonManager use a helper
> object that implements iteration over asynchronous operations by tail
> recursion
I went through a similar hassle in bug 699839, and I originally worked around
this by putting code in the tests that spun the event loop, or waited for the
next tick. Unfortunately, I don't have a copy of that patch, and the
latest version used a different technique (see the waitPending calls in the
latest patch).

(Sorry for delays - I've been off sick/in hospital. I'll have a look at the WiP patch when I can.)
(In reply to :Irving Reid from comment #10)
> The problem is that the onInstallEnded notification is sent before the
> addon's bootstrap "startup" method is called
This was changed to be that way about a year ago - firing onInstallEnded after startup() was breaking things. See:
http://theunfocused.net/2012/03/06/addons-manager-api-change/
> I can work around the problem by going around the event loop in the test,
> but that feels dirty; is there a better place to register the XPI with the
> component manager so that it doesn't race with the onInstallEnded
> notification?
>
> See also
> http://dxr.mozilla.org/mozilla-central/addon-sdk/source/lib/sdk/addon/
> installer.js#l50, which fixes bug 749745 by doing a setTimeout(0) to let the
> startup method run before other code that expects the addon to be completely
> ready.
Ew. Maybe we need to look into providing a new event for that in the future. Filed bug 864650.
In the meantime, the test add-on can set a preference or send an observer notification that the test can listen for. We use this type of pattern already in a couple of places.

(In reply to :Irving Reid from comment #11)
> The root of the problem is that the async APIs in AddonManager use a helper
> object that implements iteration over asynchronous operations by tail
> recursion
Ah. That sounds like it's fixable within AsyncObjectCaller, by having callNext() wait for the next tick before calling listener.nextObject().
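Something like this sketch (invented shape for AsyncObjectCaller; a manual queue stands in for the next-tick scheduling):

```javascript
// Manual task queue standing in for the event loop / next tick.
const queue = [];
const run = () => { while (queue.length) queue.shift()(); };

function AsyncObjectCaller(aObjects, aListener) {
  this.objects = aObjects.slice();
  this.listener = aListener;
  this.callNext();
}
AsyncObjectCaller.prototype.callNext = function() {
  // Defer to the next tick *before* calling into the listener, so a
  // listener that calls callNext() synchronously can't grow the stack.
  queue.push(() => {
    if (this.objects.length === 0)
      this.listener.noMoreObjects(this);
    else
      this.listener.nextObject(this, this.objects.shift());
  });
};

let depth = 0, maxDepth = 0, done = false;
new AsyncObjectCaller([1, 2, 3, 4], {
  nextObject(aCaller, aObject) {
    depth++; maxDepth = Math.max(maxDepth, depth);
    aCaller.callNext(); // synchronous caller: previously this would recurse
    depth--;
  },
  noMoreObjects() { done = true; }
});
run();
console.log(maxDepth, done); // → 1 true
```

The stack depth stays constant regardless of how many objects are iterated, at the cost of one trip around the event loop per object.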

At this point I don't think the JSON based implementation will land in the 23 cycle, so I'd recommend not blocking.
We're doing our best to preserve APIs, so unless you're reaching into internals there should not be much change when we land.

Comment on attachment 740300
WIP 2 - save XPI database as JSON
Review of attachment 740300:
-----------------------------------------------------------------
Still quite a ways to go, but this seems to be shaping up nicely. General direction looks good for a first iteration, though there will be some challenges when saving/reading goes fully async and some resiliency gets built in. I'd still like to eventually look at refactoring much of the XPIDatabase API.
I think it'd be worth abstracting out all sqlite migration code into its own lazy-loaded JSM - that kind of thing really impacts mobile (and it would make the XPIDatabase code *so* much more manageable).
::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +227,5 @@
> aFile.path, e);
> }
> }
>
> +// XXX IRVING is it worth taking SafeInstallOperation off main thread, into OS.File or whereever?
Yes, but IMO it's out of scope here.
@@ +921,5 @@
>
> try {
> let addon = loadManifestFromRDF(Services.io.newFileURI(file), bis);
> addon._sourceBundle = aDir.clone();
> + addon.size = getFileSize(aDir); // XXX IRVING do this async? remove it?
This is a public API, we can't remove it. Could change the API to be async, but the size should never change - so it's best done just once here, to keep things simple.
Reading the manifest needs to eventually be made async anyway; should be easy to do then (but again, this is out of scope for this bug).
::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ +159,5 @@
> this.count++;
> let self = this;
> XPIDatabase.makeAddonFromRowAsync(row, function handleResult_makeAddonFromRowAsync(aAddon) {
> function completeAddon(aRepositoryAddon) {
> + // XXX IRVING new JSON DB loading code no longer goes through this path; what should I do about _repositoryAddon?
It has to be set on that object somehow - AddonWrapper uses it extensively.
@@ +503,5 @@
> + this.addonDB = {};
> + return false;
> + }
> + if (inputAddons.schemaVersion != DB_SCHEMA) {
> + // XXX UPGRADE FROM PREVIOUS VERSION OF JSON DB
Note that this may not necessarily be an upgrade - we also need to support downgrades.
@@ +510,5 @@
> + }
> + // If we got here, we probably have good data
> + // Make AddonInternal instances from the loaded data and save them
> + delete this.addonDB;
> + let addonDB = {}
Any reason why this can't be a Map?
@@ +972,4 @@
> });
> + // XXX IRVING removed an async callback when the database was closed
> + // XXX do we want to keep the ability to async flush extensions.json
> + // XXX and then call back?
IMO, yes.
@@ +1290,5 @@
> },
>
> /**
> * Synchronously sets the Sync GUID for an add-on.
> + * XXX IRVING could replace this with setAddonProperties
Sounds reasonable.
@@ +1305,5 @@
> },
>
> /**
> * Synchronously sets the file descriptor for an add-on.
> + * XXX IRVING could replace this with setAddonProperties
Ditto.
@@ +1324,5 @@
> *
> * @param aAddon
> * The DBAddonInternal to update
> */
> + updateAddonActive: function XPIDB_updateAddonActive(aAddon, aActive) {
This too.

Created attachment 746060
TESTS WIP v4 - Make test suite more async-safe
Cleaned almost all of the debug dumps out, and fixed a few more places where the tests weren't quite async-safe. This can be applied without the JSON changes, to ensure that we're comparing results from as close as possible to the same test suite before and after.

Created attachment 746063
JSON WIP v3: Save and restore XPI database in JSON format
This should be applied on top of TESTS. This patch passes the core xpcshell tests for extensions, but not the tests related to schema or database version changes.

Created attachment 746070
UPGRADE wip V1 - Upgrade sqlite and RDF databases to JSON
This should be applied after JSON. This patch removes most of the old sqlite code, except for that required to read and upgrade old databases. This passes all but five of the existing tests.
Still to do:
Clean up the remaining test failures
Implement the backward/forward compatibility for JSON databases as described at the end of https://bugzilla.mozilla.org/show_bug.cgi?id=853389#c10
Replace existing begin / end transaction model of database management with clean/dirty and asynchronous writing of JSON file

(In reply to :Irving Reid from comment #20)
> > Any reason why this can't be a Map?
>
> The documentation for Map
> (https://developer.mozilla.org/en-US/docs/JavaScript/Reference/
> Global_Objects/Map) says it's experimental, not for production code, etc.
Heh, that warning is needlessly scary IMO. We're already using Map extensively in production code, and I try to get any new code using Map/Set where appropriate (and to slowly convert all existing code to use them).

Created attachment 748038
TESTS WIP v5 - now with even more do_execute_soon
This unwinds the stack often enough that I can pass the test suite without my stack saving hacks to AddonManager.jsm, which makes them much more useful to everyone else.

Created attachment 748051
UPGRADE wip V2 - Upgrade sqlite and RDF databases to JSON
Now handling all the upgrade scenarios from sqlite to JSON storage.
There are two failing tests that I haven't been able to sort out, and I'd like input from Dave and Blair (and others, if they want).
test_AddonRepository_cache.js is failing in run_test_17, because the addon manager is returning much more information than the test expects for one of the addons. I don't understand the behaviour of the AddonManager cache well enough to make sense of what the expected result should be, and what code paths the test is meant to cover.
test_bug659772.js is also failing, and I mostly understand why. As far as I can tell, the test sets up a scenario where:
1) We have downloaded and staged an addon update
2) We also have compatibility override data for that update
3) The browser is updated to a new version that changes the XPI database schema AND requires additional fields not extracted from the addon's manifest when the upgrade was staged
The code in question is related to bug 680802 and bug 664833, despite the name of the test file.
The behaviour I'm seeing is that in the original code, the database upgrade triggers a complete rebuild of the XPI database, with the addon state recovered by the database upgrade code. In the course of this, the compatibility overrides are applied to the addon.
In the new code I'm working to implement forward/backward compatibility in the JSON format, so we don't need to do a database rebuild. Because of this, the staged addon upgrade happens through a different code path that *does not* apply the compatibility overrides.
This leaves me wondering whether
a) I'm misunderstanding the desired behaviour of the interaction between compatibility overrides and addon upgrades
b) there's a latent bug where an addon upgrade (with compat overrides) and app upgrade at the same time, that *doesn't* require a database rebuild, is doing the wrong thing
c) I mucked up the logic of the non-rebuild code path so that it stopped applying compat overrides

(In reply to :Irving Reid from comment #26)
> Created attachment 748051
> UPGRADE wip V2 - Upgrade sqlite and RDF databases to JSON
>
> Now handling all the upgrade scenarios from sqlite to JSON storage.
>
> There are two failing tests that I haven't been able to sort out, and I'd
> like input from Dave and Blair (and others, if they want).
>
> test_AddonRepository_cache.js is failing in run_test_17, because the addon
> manager is returning much more information than the test expects for one of
> the addons. I don't understand the behaviour of the AddonManager cache well
> enough to make sense of what the expected result should be, and what code
> paths the test is meant to cover.
That test verifies that we can set a pref to limit the types of add-ons that are cached in the AddonRepository. The test tries to cache only extensions and not themes, so what we should see in the results is an extension with all properties from the XPI and the repository, and two themes with only the properties we have locally - as opposed to the previous test, which should also see repository information for those themes. You say it's seeing more information? If it's the repository information for the themes, then you're not correctly restricting caching based on the add-on type.
> test_bug659772.js is also failing, and I mostly understand why. As far as I
> can tell, the test sets up a scenario where:
>
> 1) We have downloaded and staged an addon update
> 2) We also have compatibility override data for that update
> 3) The browser is updated to a new version that changes the XPI database
> schema AND requires additional fields not extracted from the addon's
> manifest when the upgrade was staged
Looks about right
> In the new code I'm working to implement forward/backward compatibility in
> the JSON format, so we don't need to do a database rebuild. Because of this,
> the staged addon upgrade happens through a different code path that *does
> not* apply the compatibility overrides.
That sounds broken to me, why aren't you applying the compatibility update?

Created attachment 752204
TESTS WIP v6 - minor tweak to an error message
Not sure whether Mossop or Unfocused would rather have the r? on this, swap if you want.
This goes through Try cleanly on all platforms, modulo some known intermittent failures: https://tbpl.mozilla.org/?tree=Try&rev=b2ca8ddd38c3
I was hoping this would help with the persistent test failures on Android (bug 676992 et al, see toolkit/mozapps/extensions/test/xpcshell/xpcshell.ini), but the extensions tests appear to be completely disabled in Android try builds so I'll need to revisit that later to re-enable them properly.

Comment on attachment 752204
TESTS WIP v6 - minor tweak to an error message
Blair is the one who has to be in charge of this long term so I suggest that he do most of the reviews, I can do bits and pieces where necessary though.

(In reply to Dave Townsend (:Mossop) from comment #28)
> (In reply to :Irving Reid from comment #26)
...
> > test_bug659772.js is also failing, and I mostly understand why. As far as I
> > can tell, the test sets up a scenario where:
> >
> > 1) We have downloaded and staged an addon update
> > 2) We also have compatibility override data for that update
> > 3) The browser is updated to a new version that changes the XPI database
> > schema AND requires additional fields not extracted from the addon's
> > manifest when the upgrade was staged
>
> Looks about right
>
> > In the new code I'm working to implement forward/backward compatibility in
> > the JSON format, so we don't need to do a database rebuild. Because of this,
> > the staged addon upgrade happens through a different code path that *does
> > not* apply the compatibility overrides.
>
> That sounds broken to me, why aren't you applying the compatibility update?
OK, I may have misunderstood the cause here.
I'm digging through this again, and here's the sequence of events I see:
- We start up xpcshell with application version 1
- A packed XPI extension, addon3@... is installed (created with writeInstallRDFForExtension, compatible with version 1 only)
- An update to addon3, which requires restart, is installed from a separate XPI file (test/xpcshell/addons/test_bug659772.xpi). This has the same extension version (2.0) but claims to be compatible with xpcshell version 2 only
- We save a .json of the xpi metadata, including appDisabled == true because the app is version 1 and the XPI specifies version 2
- Shut down and modify the .json in place, to simulate the bug scenario described in 3) above
- Force a DB schema migration and restart
- In XPIProvider.checkForChanges, XPIProvider.processPendingFileChanges sees the staged add-on and installs it (including the cached metadata), saves metadata in aManifests[location][id]
- Later in XPIProvider.checkForChanges, XPIProvider.getInstallLocationStates (the recursive directory scan) sees the changed files installed by processPendingFileChanges - this may be a separate bug where we do extra work in the face of staged installs?
- XPIProvider.processFileChanges gets both the states from the recursive scan and the manifests from processPendingFileChanges
This is where the behaviour diverges. In the code *before* my changes, because the database was discarded and is being rebuilt, we end up in addMetadata()
- Get the cached manifest for the updated addon
- Get the migration data from the old DB
- Because the add-on version is the same, **copy the application compatibility from the old DB over the new compatibility data from the updated add-on**, which overwrites the targetApplication from compatible with only xpcshell version 2, back to compatible with version 1.
- Reset appDisabled to false, since our targetApplication version now matches
- test (do_check_false(a3.appDisabled)) passes
In the new code, since we don't discard the database and rebuild from migration data, the add-on update proceeds normally; because the newly installed version of the add-on is only compatible with xpcshell v2, and we're still running as v1, the add-on is kept as appDisabled == true, which causes the test to fail.
So my suspicion is that this is a case where the existing test, and the old code, was at least somewhat broken. The old code would give a different result for whether the updated add-on was compatible depending on whether the DB is being rebuilt at the same time, which doesn't seem right to me.
There's a comment above the code at http://dxr.mozilla.org/mozilla-central/toolkit/mozapps/extensions/XPIProvider.jsm#l2714 suggesting that the add-on version check isn't a perfect way to control whether the add-on's compatibility data gets restored from the DB migration data.
It's a fairly obscure failure; this only happens when we need a DB upgrade (including the upgrade from sqlite to JSON) *and* there's a pending add-on install that changes application compatibility but does *not* change the add-on version, so I'm not sure that it's too important to fix it.

(In reply to Dave Townsend (:Mossop) from comment #28)
> (In reply to :Irving Reid from comment #26)
...
> > test_AddonRepository_cache.js is failing in run_test_17, because the addon
> > manager is returning much more information than the test expects for one of
> > the addons. I don't understand the behaviour of the AddonManager cache well
> > enough to make sense of what the expected result should be, and what code
> > paths the test is meant to cover.
>
> That tests verifies that we can set a pref and limit the types of add-ons
> that are cached in the AddonRepository. The test tries to only cache
> extensions and not themes so what we should see in the results is an
> extension with all properties from the XPI and the repository and two themes
> with only properties that we have locally as opposed to the previous test
> that should also see repository information for those themes. You say its
> seeing more information? If its the repository information for the themes
> that its seeing then you're not correctly restricting caching based on the
> add-on type.
I've sorted the rest of the way through this one. Basically, the old XPIDatabase code would always reset _repositoryAddon every time an XPI addon was fetched, via AsyncAddonListCallback. This makes XPIProvider.updateAddonRepositoryData mostly not do anything, since
(a) it calls getVisibleAddons() which already resets the _repositoryAddon and
(b) it never removes an existing value of _repositoryAddon, even if there is no longer one in the cache (but this doesn't matter since getVisibleAddons already removed it)
(c) any subsequent retrieval of the addon will reset _repositoryAddon again anyway
updateAddonRepositoryData does do updateAddonDisabledState() iff the new cache contains an entry for the addon, which AsyncAddonListCallback doesn't do, so the behaviour is not exactly the same for the two paths.
In fact, test_AddonRepository_cache.js/run_test_17 resets _repositoryAddon five times, twice via AddonManager indirectly calling AsyncAddonListCallback before the cache is cleared, twice in updateAddonRepositoryData (indirectly via getVisibleAddons/AsyncAddonListCallback and then directly), and once again through getAddonsByID/AsyncAddonListCallback after.
The logic change that broke the test was that my replacement for AsyncAddonListCallback only fetches _repositoryAddon if we don't already have a value for it; this leads to me holding onto the older cached data in this test case. I can fix it for the time being by always resetting _repositoryAddon, but in the medium term this whole mechanism cries out for redesign.
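Roughly, the interim fix looks like this (attachRepositoryAddon is an invented name for the spot in AsyncAddonListCallback that populates the field, and a Map stands in for the AddonRepository cache):

```javascript
// Always refresh _repositoryAddon from the current repository cache,
// rather than keeping a previously attached value.
function attachRepositoryAddon(aAddon, aCache) {
  // Reset even when the cache no longer has an entry; otherwise a stale
  // _repositoryAddon survives a cache clear, as happened in run_test_17.
  aAddon._repositoryAddon = aCache.get(aAddon.id) || null;
  return aAddon;
}

const cache = new Map([["addon1@tests.mozilla.org", { name: "Cached" }]]);
const addon = { id: "addon1@tests.mozilla.org",
                _repositoryAddon: { name: "Stale" } };

attachRepositoryAddon(addon, cache);
console.log(addon._repositoryAddon.name); // → Cached

cache.clear();
attachRepositoryAddon(addon, cache);
console.log(addon._repositoryAddon); // → null
```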

(In reply to :Irving Reid from comment #31)
> (In reply to Dave Townsend (:Mossop) from comment #28)
...
> So my suspicion is that this is a case where the existing test, and the old
> code, was at least somewhat broken. The old code would give a different
> result for whether the updated add-on was compatible depending on whether
> the DB is being rebuilt at the same time, which doesn't seem right to me.
Yeah, this doesn't make a lot of sense. I suspect it was a bit of a hack to keep things at least sane - without that fix, things were breaking horribly in this case.
I think the important scenario to check is this:
Have add-on A installed, compatible with version 1.
Stage the install of the same version of add-on A which also claims compatibility with version 1 but has an update.rdf that extends that to also be compatible with version 2.
Upgrade to version 2 forcing a DB schema change.
Add-on A should still be installed and compatible.

Created attachment 754908
JSON v4: Save and restore XPI database in JSON format
Cleaned up and ready for a first review pass.
There are a bunch of things left half done in this patch and then finished off in the UPGRADE patch; let me know if you'd prefer to review a merged patch.

Created attachment 754912
UPGRADE V3 - Upgrade sqlite and RDF databases to JSON
Handle database version upgrades, clean up all but one test failure (I'm not yet comfortable with the issues discussed in Comment 32 / Comment 33).
Remaining XXX comments in this patch will be addressed in additional patches to be posted this week.

Created attachment 757910
DeferredSave WIP v1 - module to handle deferred async write of JSON
Here's a first cut at moving the deferred JSON save into a helper module. The test file is just a stub of the harness for now; I'm working on it. This implements the behaviour discussed in http://www.controlledflight.ca/2013/05/31/saving-browser-state-asynchronously/
The basic API is:
let saver = new DeferredSave(aFile, aDataCallback, aDelay);
aFile: nsIFile where the data should be saved
function aDataCallback() -> string: called by the saver when it wants a snapshot of the data to be saved
aDelay: number of milliseconds to delay before starting write, in order to batch up changes
saver.saveChanges(aCallback)
function aCallback(error): optional callback function for when the save completes. If the save is successful, null is passed for the error argument. If the save fails, the exception is passed as the error argument.
Marks the saver "dirty" and tells the saver to start the delay timer; when the timer runs out, the saver will call aDataCallback() to get a snapshot and start writing. If aCallback is non-null, the saver will call back aCallback when the write completes.
Note that if there is more than one call to saveChanges() before the data is written, *all* the callbacks are remembered and called - this might not be the most useful design.
saver.flush(aCallback)
Calls aDataCallback() to get a snapshot if the saver is dirty, waits for any in-progress write to complete, writes a final copy of the data to disk if the saver was dirty, and then calls the callback.
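To make the batching semantics above concrete, here is a minimal self-contained sketch of the behaviour (this is *not* the real DeferredSave.jsm, which writes via OS.File.writeAtomic; MiniDeferredSave and writeFn are hypothetical stand-ins). Multiple saveChanges() calls inside the delay window coalesce into one write, the data snapshot is taken when the write starts, and every pending callback is invoked when it completes:

```javascript
// Minimal sketch of the deferred-save batching described above.
class MiniDeferredSave {
  constructor(writeFn, dataCallback, delayMs) {
    this._write = writeFn;           // stands in for the atomic file write
    this._dataCallback = dataCallback;
    this._delay = delayMs;
    this._timer = null;
    this._pendingCallbacks = [];
  }

  saveChanges(callback) {
    if (callback) {
      this._pendingCallbacks.push(callback);
    }
    // Only one timer at a time: later calls in the window piggyback on it.
    if (!this._timer) {
      this._timer = setTimeout(() => this._doWrite(), this._delay);
    }
  }

  _doWrite() {
    this._timer = null;
    const callbacks = this._pendingCallbacks;
    this._pendingCallbacks = [];
    let error = null;
    try {
      this._write(this._dataCallback()); // snapshot taken at write time
    } catch (e) {
      error = e;
    }
    // *All* accumulated callbacks are called, success or failure.
    for (const cb of callbacks) {
      cb(error);
    }
  }
}
```

Calling saveChanges() twice before the timer fires produces a single write containing the latest snapshot, and both callbacks fire.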

Created attachment 758123
DeferredSave WIP v2 - deferred async writes module, now with unit tests
Both the module and the tests are still based on callbacks rather than Promises.

Comment on attachment 754908
JSON v4: Save and restore XPI database in JSON format
Review of attachment 754908:
-----------------------------------------------------------------
::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +5029,5 @@
> + /*
> + * JSON.stringify filter to save only the relevant parts of an AddonInternal
> + * when recording metadata for a deferred installation
> + */
> + function install_metadata_filter(aKey, aValue) {
Why move this to this function, instead of keeping it in AddonInternal.toJSON()? toJSON feels neater, and makes for nice debug possibilities (ditto for DBAddonInternal).
::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ +219,5 @@
> + * @param aCallback
> + * Function with signature f([aMethod(object)]), called when all values
> + * are available
> + */
> +function asyncMap(aObjects, aMethod, aCallback) {
This makes me wish we had Mike's Async.jsm in the tree already (see bug 870774).
@@ +372,5 @@
> + * @param aLoaded
> + * the addon data fields loaded from the other source
> + * @return a DBAddonInternal populated with the loaded data
> + */
> +function AddonFromSave(aLoaded) {
The name of this irks me. createDBAddonInternal or something similar? Or maybe even just move it all to DBAddonInternal's constructor.
@@ +528,5 @@
> +
> + /**
> + * Open and parse the JSON XPI extensions database.
> + * @return true: the DB was successfully loaded
> + * false: The DB either needs upgrade or did not exist at all.
The existing AddonRepository code deletes the DB file when it knows it's meant to be empty, which saves I/O - the new JSON DB code has been adapted to do the same. XPIDatabase has never done that, but I think it'd be worth looking into.
@@ +566,5 @@
> + }
> + // If we got here, we probably have good data
> + // Make AddonInternal instances from the loaded data and save them
> + delete this.addonDB;
> + let addonDB = {}
Over in bug 853389, AddonRepository is storing the add-ons in-memory in a Map, which is specifically designed for this use-case. On disk, it's storing them in an array, which will save some bytes (and we don't benefit from being able to index by ID for the on-disk format anyway). Think we should do the same here.
@@ +1013,5 @@
> // If we are running with an in-memory database then force a new
> // extensions.ini to be written to disk on the next startup
> + // XXX IRVING special case for if we fail to save extensions.json?
> + // XXX maybe doesn't need to be at shutdown?
> + // if (!this.connection.databaseFile)
Yeah, probably safest to still keep this - at least until after all this work is done and we discover it's safe to not have it.
But if it's there, it does need to be done on shutdown - there are a few places where we set this to false during runtime. Alternatively, we could use a separate pref just for this situation (probably more reliable).
@@ +1028,4 @@
> });
> + // XXX IRVING removed an async callback when the database was closed
> + // XXX do we want to keep the ability to async flush extensions.json
> + // XXX and then call back?
Yes - useful for testing.
@@ +1051,5 @@
> return [];
>
> + let locations = {};
> + for each (let addon in this.addonDB) {
> + locations[addon.location] = 1;
You're using locations as a set, better off just using a Set. Can easily convert a Set to an array via: [...mySet]
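Blair's Set suggestion, sketched with hypothetical stand-ins for the addonDB entries:

```javascript
// Collecting the unique install locations with a Set instead of an
// object-as-set. The addon objects here are illustrative only.
const addons = [
  { id: "a@tests.mozilla.org", location: "app-profile" },
  { id: "b@tests.mozilla.org", location: "app-global" },
  { id: "c@tests.mozilla.org", location: "app-profile" },
];

const locations = new Set();
for (const addon of addons) {
  locations.add(addon.location); // duplicate adds are ignored
}

const locationArray = [...locations]; // spread converts the Set to an array
```

Sets preserve insertion order, so the resulting array lists each location in the order it was first seen.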
@@ +1083,5 @@
> + * Find the first addon that matches the filter function
> + * @param aFilter
> + * Function that takes an addon instance and returns
> + * true if that addon should be selected
> + * @return an array of DBAddonInternals
That comment is wrong (it's not an array for _findAddon).
@@ +1148,5 @@
> * A callback to pass the DBAddonInternal to
> */
> getVisibleAddonForID: function XPIDB_getVisibleAddonForID(aId, aCallback) {
> + let addon = this._findAddon(function visibleID(aAddon) {return ((aAddon.id == aId) && aAddon.visible)});
> + getRepositoryAddon(addon, aCallback);
Hmm, is there a better way to do getRepositoryAddon()? Manually calling it in each of these functions seems error-prone (eg, why's it not being called in getAddonInLocation/getAddonsInLocation?).
@@ +1252,5 @@
> addAddonMetadata: function XPIDB_addAddonMetadata(aAddon, aDescriptor) {
> // If there is no DB yet then forcibly create one
> + // XXX IRVING I don't think this will work as expected because the addonDB
> + // getter will kick in. Might not matter because of the way the new DB
> + // creates itself.
Indeed, that getter will kick in.
@@ +1264,5 @@
> + this.addonDB[newAddon._key] = newAddon;
> + if (newAddon.visible) {
> + // XXX IRVING This seems an odd place to remove visibility of existing addons, but
> + // the old code did it here...
> + this.makeAddonVisible(newAddon);
Odd why? Doing it here guarantees only one add-on with the same ID is visible.
::: toolkit/mozapps/extensions/test/xpcshell/test_corrupt.js
@@ +250,5 @@
> // serves this purpose). On startup the add-ons manager won't rebuild
> // because there is a file there still.
> shutdownManager();
> var dbfile = gProfD.clone();
> + dbfile.append("extensions.json");
EXTENSIONS_DB
::: toolkit/mozapps/extensions/test/xpcshell/test_corrupt_strictcompat.js
@@ +250,5 @@
> // serves this purpose). On startup the add-ons manager won't rebuild
> // because there is a file there still.
> shutdownManager();
> var dbfile = gProfD.clone();
> + dbfile.append("extensions.json");
EXTENSIONS_DB
::: toolkit/mozapps/extensions/test/xpcshell/test_startup.js
@@ +132,5 @@
> check_startup_changes(AddonManager.STARTUP_CHANGE_DISABLED, []);
> check_startup_changes(AddonManager.STARTUP_CHANGE_ENABLED, []);
>
> let file = gProfD.clone();
> + file.append("extensions.json");
EXTENSIONS_DB
@@ +190,5 @@
> check_startup_changes(AddonManager.STARTUP_CHANGE_ENABLED, []);
> do_check_true(gCachePurged);
>
> let file = gProfD.clone();
> + file.append("extensions.json");
EXTENSIONS_DB

Created attachment 759385
ASYNC v1 - Save XPI Database JSON asynchronously
This patch depends on all the preceding patches, in order: TEST, JSON, UPGRADE, DeferredSave.
This removes the old beginTransaction/commitTransaction model of controlling updates to the database and replaces it with a clean/dirty/flush model as supported by DeferredSave.jsm.
There were a couple of places where I added additional try/catch or other error handling to handle failures where the old code would have triggered a rollback; for the other cases I convinced myself that either (a) errors were already being handled in the existing code or (b) rollback would leave the data less consistent than not rolling back.

Created attachment 759824
DeferredSave v3 - deferred async writes module, review comments incorporated
Aside from addressing review comments, I also added handling and tests for cases where the data callback throws an error.
Question: Should we try to get this landed in a shared place in toolkit, or leave it under toolkit/mozapps/extensions?
(In reply to :Felipe Gomes from comment #40)
> Comment on attachment 758123
> DeferredSave WIP v2 - deferred async writes module, now with unit tests
>
> Very very nice! The flush() function is kinda confusing to me with all the
> aCallback/flushCallback dance, but I really don't know how to simplify it
> more
I updated the comments to try and make it clearer.
> ::: toolkit/mozapps/extensions/DeferredSave.jsm
> @@ +68,5 @@
> > +
> > + LOG("Save changes");
> > + if (!this._dirtyData) {
> > + this._dirtyData = true;
> > + if (!this._saveInProgress)
>
> _saveInProgress will always be false here because if it was true we would
> have returned earlier
Good catch.
> @@ +109,5 @@
> > + // end up waiting for an in-progress async save
> > + let jsonData = this._dataCallback();
> > + // reset dirty flag so _writeToDisk/finished() doesn't restart timer
> > + this._dirtyData = false;
> > + flushCallback = result => {
>
> result unused? fwiw you can use "() =>" for no params with these arrow
> functions
I like leaving the parameter there to document that the "correct" signature of that particular callback is a function that takes one parameter. Bit of a holdover from writing type-checked code, I suppose.
> @@ +114,5 @@
> > + if (aCallback) {
> > + LOG("registering delayed save callback...");
> > + this._pendingCallbacks.push(aCallback);
> > + }
> > + // This could be replaced with a synchronous write
>
> interesting idea..
I'm not completely convinced we want to, mostly because we already need to allow async behaviour at that point in case there's already a write in progress.
(In reply to David Rajchenbach Teller [:Yoric] from comment #42)
> Comment on attachment 758123
> DeferredSave WIP v2 - deferred async writes module, now with unit tests
...
> ::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js
> @@ +27,5 @@
> > +
> > +context.OS.File.writeAtomic = function mock_writeAtomic(aFile, aData, aOptions) {
> > + do_print("writeAtomic: " + aFile + " data: '" + aData + "', " + aOptions.toSource());
> > + if (expectData !== undefined) {
> > + do_check_eq(expectData, aData);
>
> This global value |expectData| looks rather error-prone to me. Couldn't you
> do something simpler, along the lines of:
> - use the regular writeAtomic;
> - read the file from disk to compare with whichever data you have written
> ?
I didn't want to use the real writeAtomic because it would have made the tests much harder to write and manage, since I wouldn't be able to control the order of asynchronous events. I changed from setting |expectData| and checking inside writeAtomic, to having writeAtomic "write" its output to a shared variable and then checking that shared variable in the test - I think that's clearer.
> @@ +39,5 @@
> > +
> > +let callbackData = "Data to save";
> > +function dataCallback() {
> > + return callbackData;
> > +}
>
> callbackData and dataCallback? Sounds like a good recipe for reviewer
> headache.
> Also, for the sake of testing, you probably want a different string for each
> test, so you could replace occurrences of |dataCallback| with |(() => "the
> name of the test")|.
I use the same pattern of changing the data returned by the callback during test in many tests, so it would have been a pain to redefine this for each. I renamed the shared variable to "dataToSave" to be less confusing.
I suppose I could make it even more obvious by putting these variables inside a "MockWriteAtomic" object or something like that, so references would look like MockWriteAtomic.dataToSave and MockWriteAtomic.writtenData...
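That encapsulation could look something like the sketch below (MockWriteAtomic and its fields are hypothetical names, not the actual test code): the fake writeAtomic records what would have been written, and the test asserts on that shared state instead of touching the disk.

```javascript
// Sketch of the mock-writeAtomic pattern discussed above.
const MockWriteAtomic = {
  writtenFile: undefined,
  writtenData: undefined,

  writeAtomic(file, data, options) {
    this.writtenFile = file;
    this.writtenData = data; // "write" to a variable instead of the disk
    return Promise.resolve();
  },
};
```

In the real tests this object would be swapped in for OS.File.writeAtomic, so the test controls exactly when and how each asynchronous "write" completes.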

(In reply to :Irving Reid from comment #44)
> Question: Should we try to get this landed in a shared place in toolkit, or
> leave it under toolkit/mozapps/extensions?
From a quick glance, it seems potentially useful more generally (other DeferredTask users may want to use it?), but we don't need to figure that out now necessarily. We can move it to toolkit when we have a user lined up for it (file followup bugs).

Created attachment 760450
JSON v5: Save and restore XPI database in JSON format, response to first review
Based on feedback from Yoric on the DeferredSave patch, I've also converted some of the __defineGetter__() calls to Object.defineProperty(); I'll get the rest in a later patch.
(In reply to Blair McBride [:Unfocused] from comment #41)
> Comment on attachment 754908
> JSON v4: Save and restore XPI database in JSON format
> ...
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> @@ +5029,5 @@
> > + * JSON.stringify filter to save only the relevant parts of an AddonInternal
>
> Why move this to this function, instead of keeping it in
> AddonInternal.toJSON()? toJSON feels neater, and makes for nice debug
> possibilities (ditto for DBAddonInternal).
That was left over from an earlier version that used JSON.stringify() to extract two different sets of data from the same object, so I needed to control which fields were serialized. I hadn't undone the change when I abandoned that approach.
> ::: toolkit/mozapps/extensions/XPIProviderUtils.js
> @@ +219,5 @@
> > +function asyncMap(aObjects, aMethod, aCallback) {
>
> This makes me wish we had Mike's Async.jsm in the tree already (see bug
> 870774).
If I need this code in the long run, I'd use Async.jsm - I'd much rather redesign the way we attach the AddonRepository cached objects to AddonWrappers so we don't need it at all.
> @@ +372,5 @@
> > +function AddonFromSave(aLoaded) {
>
> The name of this irks me. createDBAddonInternal or something similar? Or
> maybe even just move it all to DBAddonInternal's constructor.
Excellent idea. I've moved DBAddonInternal to XPIProviderUtils.jsm and put this logic in its constructor. This will cause serious bitrot on the subsequent patches - would you rather I pulled it out into a separate patch file?
> @@ +528,5 @@
> > +
> > + /**
> > + * Open and parse the JSON XPI extensions database.
> > + * @return true: the DB was successfully loaded
> > + * false: The DB either needs upgrade or did not exist at all.
>
> The existing AddonRepository code deletes the DB file when it knows it's
> meant to be empty, which saves I/O - the new JSON DB code has been adapted
> to do the same. XPIDatabase has never done that, but I think it'd be worth
> looking into.
I'd like to leave that for a follow up bug if we decide we want it.
> @@ +566,5 @@
> > + let addonDB = {}
>
> Over in bug 853389, AddonRepository is storing the add-ons in-memory in a
> Map, which is specifically designed for this use-case. On disk, it's storing
> them in an array, which will save some bytes (and we don't benefit from
> being able to index by ID for the on-disk format anyway). Think we should do
> the same here.
I change this to a Map in the UPGRADE patch.
> @@ +1013,5 @@
> > // If we are running with an in-memory database then force a new
> > // extensions.ini to be written to disk on the next startup
> > + // XXX IRVING special case for if we fail to save extensions.json?
> > + // XXX maybe doesn't need to be at shutdown?
> > + // if (!this.connection.databaseFile)
>
> Yea, probably safest to still keep this - at least until after all this work
> is done, and we discover its safe to not have it.
>
> But if its there, it does need to be done on shutdown - there's a few places
> where we set this to false during runtime. Alternatively, we could use a
> separate pref just for this situation (probably more reliable).
I re-implement this logic in the UPGRADE patch.
> @@ +1028,4 @@
> > });
> > + // XXX IRVING removed an async callback when the database was closed
> > + // XXX do we want to keep the ability to async flush extensions.json
> > + // XXX and then call back?
>
> Yes - useful for testing.
Re-implemented in the ASYNC patch.
> @@ +1051,5 @@
> > + let locations = {};
> > + for each (let addon in this.addonDB) {
> > + locations[addon.location] = 1;
>
> You're using locations as a set, better off just using a Set. Can easily
> convert a Set to an array via: [...mySet]
Changed to a Set in the UPGRADE patch.
> @@ +1083,5 @@
> > + * Find the first addon that matches the filter function
> > + * @param aFilter
> > + * Function that takes an addon instance and returns
> > + * true if that addon should be selected
> > + * @return an array of DBAddonInternals
>
> That comment is wrong (it's not an array for _findAddon).
Fixed.
> @@ +1148,5 @@
> > * A callback to pass the DBAddonInternal to
> > */
> > getVisibleAddonForID: function XPIDB_getVisibleAddonForID(aId, aCallback) {
> > + let addon = this._findAddon(function visibleID(aAddon) {return ((aAddon.id == aId) && aAddon.visible)});
> > + getRepositoryAddon(addon, aCallback);
>
> Hmm, is there a better way to do getRepositoryAddon()? Manually calling it
> in each of these functions seems error-prone (eg, why's it not being called
> in getAddonInLocation/getAddonsInLocation?).
I would love to redesign this, but I want to wait until my changes here and Felipe's in AddonRepository are done before I look at anything that cuts across both. I've fixed getAddonInLocation() to call getRepositoryAddon(). getAddonsInLocation() is only used internally, in contexts where the repository addon info isn't needed.
> @@ +1252,5 @@
> > addAddonMetadata: function XPIDB_addAddonMetadata(aAddon, aDescriptor) {
> > // If there is no DB yet then forcibly create one
> > + // XXX IRVING I don't think this will work as expected because the addonDB
> > + // getter will kick in. Might not matter because of the way the new DB
> > + // creates itself.
>
> Indeed, that getter will kick in.
I'll fix this in a separate patch that does async load of the DB when we're loading on demand.
> @@ +1264,5 @@
> > + this.addonDB[newAddon._key] = newAddon;
> > + if (newAddon.visible) {
> > + // XXX IRVING This seems an odd place to remove visibility of existing addons, but
> > + // the old code did it here...
> > + this.makeAddonVisible(newAddon);
>
> Odd why? Doing it here guarantees only one add-on with the same ID is
> visible.
I don't think it's so odd any more; comment removed.
> ::: toolkit/mozapps/extensions/test/xpcshell/test_corrupt.js
> @@ +250,5 @@
> > // serves this purpose). On startup the add-ons manager won't rebuild
> > // because there is a file there still.
> > shutdownManager();
> > var dbfile = gProfD.clone();
> > + dbfile.append("extensions.json");
>
> EXTENSIONS_DB
These are all cleaned up in a later patch.

Comment on attachment 759824
DeferredSave v3 - deferred async writes module, review comments incorporated
Review of attachment 759824:
-----------------------------------------------------------------
I'm still not enthusiastic about adding callback-based code while we're migrating plenty of code to promises.
What's the rationale, again?
::: toolkit/mozapps/extensions/DeferredSave.jsm
@@ +34,5 @@
> + *
> + * @constructor
> + * @param aFile
> + * nsIFile representing the full path of the file where the data
> + * is to be written.
Please don't use a nsIFile to represent a path. We want to get rid of nsIFile, so just use a string.
@@ +40,5 @@
> + * Callback function that takes no argument and returns the data to
> + * be written. If aCallback returns a ArrayBufferView, the contained
> + * bytes will be written to the file as is.
> + * If the aCallback returns a String the data will be UTF-8 encoded
> + * and then written to the file.
Nit: I'm not a big fan of "aDataCallback" as a name. I don't have any better name at hand, but if you find a good idea, don't hesitate to use it :)
@@ +72,5 @@
> +}
> +
> +DeferredSave.prototype = {
> + // Data is dirty and needs to be flushed
> + _dirtyData: false,
That's mutable, please don't put it in the prototype.
@@ +102,5 @@
> + this._dirtyData = true;
> + this._deferredWriter.start();
> + }
> + if (aCallback) {
> + this._pendingCallbacks.push(aCallback);
So, if data is not dirty, the callback might never be called? That looks weird.
@@ +121,5 @@
> + * 4) Data is being written and in-memory copy is dirty
> + *
> + * @param aCallback a callback to be called when the data
> + * has finished being flushed (it will still
> + * be called if there was no dirty data).
Nit: Whitespace.
@@ +157,5 @@
> + }
> + catch(e) {
> + // Something broke, make our final flush call back with the error
> + // instead of saving the data. Doing it this way waits for an
> + // in-progress write to complete before calling back with the error
Not sure why you put this try/call. The only code that seems to be able to raise an error is dataCallback, why not protect just that line?
@@ +187,5 @@
> + }
> + catch(e) {
> + ERROR("Deferred save failed", e);
> + this._writeFinished(e);
> + }
What's the point of this try/catch?
I believe that |this._writeToDisk| can fail only in case of type error, which is certainly not what you want to defend against.
@@ +189,5 @@
> + ERROR("Deferred save failed", e);
> + this._writeFinished(e);
> + }
> + },
> +
Nit: Whitespace.
::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js
@@ +41,5 @@
> +let dataToSave = "Data to save";
> +function dataCallback() {
> + return dataToSave;
> +}
> +
It seems to me that you want a subconstructor of DeferredSave (let's call it DeferredSaveTester) with the following additional features:
- both |saveChanges| and |flush| provide the content that has been written (either by returning a promise or by calling a callback);
- you can observe whenever a flush takes place.
Could you please replace/encapsulate all these global variables stuff as this single constructor DeferredSaveTester? This should make your tests easier to read.
@@ +55,5 @@
> + saver.saveChanges(error => {
> + if (error)
> + do_report_unexpected_exception(error);
> + do_check_eq(data, writtenData);
> + run_next_test();
Ideally, this could be rewritten as
add_task(function test_basic_save_succeeds() {
let data = "Test 1 Data";
let saver = new DeferredSaveTester(testFile, () => data, 1);
let writtenData = yield saver.saveChanges();
do_check_eq(data, writtenData);
});
Even if you don't want to use promise, you should be able to get something easier on the eye by using DeferredSaveTester.
@@ +61,5 @@
> +});
> +
> +// Two saves called during the same event loop, both with callbacks
> +// Make sure we save only the second version of the data
> +add_test(function test_two_saves() {
Well, this could look like
add_task(function test_two_saves() {
let dataToSave = "Test first save";
let secondData = "Test second save";
let saver = new DeferredSaveTester(testFile, () => dataToSave, 1);
let promiseWrittenData1 = saver.saveChanges();
dataToSave = secondData;
let promiseWrittenData2 = saver.saveChanges();
do_check_eq((yield promiseWrittenData1), secondData);
do_check_eq((yield promiseWrittenData2), secondData);
});
Not sure what you want to test with |firstCallback|. That your callbacks are only called once? That they are called in some specific order?

Created attachment 762461
DeferredSave v4 - Promise version
(In reply to David Rajchenbach Teller [:Yoric] from comment #48)
> Comment on attachment 759824
> DeferredSave v3 - deferred async writes module, review comments incorporated
>
> I'm still not enthusiastic about adding callback-based code while we're
> migrating plenty of code to promises.
> What's the rationale, again?
Mostly that I didn't feel comfortable enough with Promises to dig in and try. I got over that; here's a version that uses Promises (and doesn't use DeferredTask, because it was just as easy to implement the timer myself).
> > + * @param aFile
> > + * nsIFile representing the full path of the file where the data
> > + * is to be written.
>
> Please don't use a nsIFile to represent a path. We want to get rid of
> nsIFile, so just use a string.
Done.
> Nit: I'm not a big fan of "aDataCallback" as a name. I don't have any better
> name at hand, but if you find a good idea, don't hesitate to use it :)
How about "aDataProvider"?
> > +DeferredSave.prototype = {
> > + // Data is dirty and needs to be flushed
> > + _dirtyData: false,
>
> That's mutable, please don't put it in the prototype.
Moot with the Promise-based rewrite, but thanks; I'll remember that for the future.
> @@ +102,5 @@
> > + this._dirtyData = true;
> > + this._deferredWriter.start();
> > + }
> > + if (aCallback) {
> > + this._pendingCallbacks.push(aCallback);
>
> So, if data is not dirty, the callback might never be called? That looks
> weird.
Also moot with the rewrite, but there's no else; if there's a callback it gets added to the chain whether or not the data was dirty (the tests check this)
> Not sure why you put this try/call. The only code that seems to be able to
> raise an error is dataCallback, why not protect just that line?
I reduced the scope of the try/catch in the Promise version, but it still covers more than just the dataProvider; if something can cause some other part of this code to blow up I'd rather the error was passed to someone who can do something useful with it.
> ::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js
> @@ +41,5 @@
> > +let dataToSave = "Data to save";
> > +function dataCallback() {
> > + return dataToSave;
> > +}
> > +
>
> It seems to me that you want a subconstructor of DeferredSave (let's call it
> DeferredSaveTester) with the following additional features:
> - both |saveChanges| and |flush| provide the content that has been written
> (either by returning a promise or by calling a callback);
> - you can observe whenever a flush takes place.
>
> Could you please replace/encapsulate all these global variables stuff as
> this single constructor DeferredSaveTester? This should make your tests
> easier to read.
I encapsulated the test harness quite a bit; not in exactly the way you describe, because I needed to be able to switch back and forth between waiting on outputs of the code under test, and prodding the underside of the mock. That said, between the Task/Promise approach to the new tests and the encapsulated harness I think the tests are much clearer.
> Not sure what you want to test with |firstCallback|. That your callbacks are
> only called once? That they are called in some specific order?
Yes, I'm making sure that each callback is called exactly once and in the correct order.
Thanks for pushing me to try a Task/Promise approach to this module. I think the code turned out well, and it was time well spent learning how to use the libraries.

(In reply to :Irving Reid from comment #51)
> I need to update the UPGRADE patch, because the first round of comment fixes
> bitrotted it terribly; I'll include these changes there.
Yeah, makes sense. And, conveniently, it will mean less reviewing for me :)
> Any ETA on the rest of the review stack?
Today, hopefully. Sorry it's taken so long - these patches are getting hard to review, and I'm having to look at them all at once to get a full overview. It would be safest to wait to land anything until after the next merge on the 24th, so I kinda wonder if a temporary project branch would be useful - it'd make reviewing subsequent patches so much easier.

Comment on attachment 754912
UPGRADE V3 - Upgrade sqlite and RDF databases to JSON
Review of attachment 754912:
-----------------------------------------------------------------
r- mostly just because it's so badly bitrotted, and I'd like to be able to apply it to see everything.
(In reply to :Irving Reid from comment #46)
> I've moved DBAddonInternal to XPIProviderUtils.jsm and put
> this logic in its constructor. This will cause serious bitrot on the
> subsequent patches - would you rather I pulled it out into a separate patch
> file?
Since the JSON patch has a r+ now, with just minor fixups left, it'd be easier for me if you could unbitrot the other patches, and include any fixups there (or in separate patches).
::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ +385,5 @@
> converter.close();
> + this.dbfileExists = true;
> + // XXX probably only want to do this if the version is different
> + Services.prefs.setIntPref(PREF_DB_SCHEMA, DB_SCHEMA);
> + Services.prefs.savePrefFile(null); // XXX is this bad sync I/O?
Yes :( It's a trade-off between the sync I/O of flushing prefs now, and the cost of rebuilding the DB next startup if we don't shut down cleanly before the prefs are written to disk. That said, it's arguably more important to ensure the prefs are written to disk when setting PREF_PENDING_OPERATIONS to true, and we don't flush then.
@@ +440,5 @@
> },
>
> /**
> + * XXX IRVING new json file doesn't have this retry logic for
> + * problems opening the file. Do we want it?
No, it's not helpful with JSON. The old SQLite retry code was for the case where the DB was corrupt - you're already handling that case.
@@ +447,2 @@
> * existing file and create an empty database. If that fails then it will
> * open an in-memory database that can be used during this session.
Should update this comment.
@@ +478,5 @@
> + this.migrateData = null;
> + return true;
> + }
> + LOG("Migrating data from sqlite");
> + this.migrateData = this.getMigrateDataFromDatabase(connection);
loadSqliteData and getMigrateDataFromDatabase should just be merged, since there's no obvious distinction between the two.
@@ +527,2 @@
> }
> + cstream.close();
The |.close()| call should be inside a |finally {}| block.
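The pattern Blair is asking for, as a generic sketch (readAndClose and the stream shape are hypothetical stand-ins for the converter-stream code in the patch): the close() call in a finally block runs whether the read returns or throws.

```javascript
// Close the stream on both the success and the failure path.
function readAndClose(stream, readAll) {
  try {
    return readAll(stream);
  } finally {
    stream.close(); // runs even if readAll() threw
  }
}
```

Without the finally, an exception while parsing would leak the open stream until garbage collection, which can keep the underlying file handle alive.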
@@ +580,5 @@
> else {
> + WARN("Extensions database " + this.jsonFile.path +
> + " exists but is not readable; rebuilding in memory");
> + // XXX open question - if we can overwrite at save time, should we, or should we
> + // leave the locked database in case we can recover from it next time we start up?
If it's locked from reading, you won't be able to write to it anyway, right? And we do want to try as hard as possible to recover the info from the existing DB.
@@ +593,2 @@
>
> + // XXX what about aForceOpen? Appears to handle the case of "don't open DB file if there aren't any extensions"?
Not exactly. Or at least, not with the same behaviour as AddonRepository has. AddonRepository will delete an existing database file if it's empty; aForceOpen here only defers the creation of the database file if there isn't one already.
I thought I saw some comment about the default theme, in relation to this...now I can't find it. Anyway: Not every app has a default theme (eg, Firefox on Android).
@@ +891,2 @@
>
> + for (let [key, addon] of this.addonDB) {
If you're not using |key|, you can save creating the variable by:
for (let [, addon] of this.addonDB) {
::: toolkit/mozapps/extensions/test/xpcshell/test_corrupt.js
@@ +250,5 @@
> // serves this purpose). On startup the add-ons manager won't rebuild
> // because there is a file there still.
> shutdownManager();
> + gExtensionsJSON.remove(true);
> + gExtensionsJSON.create(AM_Ci.nsIFile.DIRECTORY_TYPE, 0755);
Since you're updating this anyway... octal notation is going away, so could you update that to |parseInt("755", 8)|? (I like that way because it lets us keep the familiar octal notation)
Ditto with test_corrupt_strictcompat.js

Created attachment 763764[details][diff][review]
UPGRADE V4 - Upgrade sqlite and RDF databases to JSON, incorporating reviews
This patch fixes up bitrot and incorporates all the cleanups from https://bugzilla.mozilla.org/show_bug.cgi?id=853388#c49, and also:
(In reply to Blair McBride [:Unfocused] from comment #53)
> Review of attachment 754912[details][diff][review]:
> > + Services.prefs.setIntPref(PREF_DB_SCHEMA, DB_SCHEMA);
> > + Services.prefs.savePrefFile(null); // XXX is this bad sync I/O?
>
> Yes :( It's a trade-off between the sync IO by flushing prefs now, and the
> cost of rebuilding the DB next startup if we don't shutdown cleanly before
> the prefs are written to disk. That said, it's arguably more important to
> ensure the prefs are written to disk when setting PREF_PENDING_OPERATIONS to
> true, and we don't flush then.
This gets rebuilt in the ASYNC patch - I set the pref but don't flush; the cost of being wrong is relatively low.
> @@ +478,5 @@
> > + this.migrateData = null;
> > + return true;
> > + }
> > + LOG("Migrating data from sqlite");
> > + this.migrateData = this.getMigrateDataFromDatabase(connection);
>
> loadSqliteData and getMigrateDataFromDatabase should just be merged, since
> there's no obvious distinction between the two.
Fixed in the ASYNC patch
> @@ +527,2 @@
> > }
> > + cstream.close();
>
> The |.close()| call should be inside a |finally {}| block.
Good point, I put in a finally for fstream too, in case it doesn't get closed by cstream.
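The shape described here can be sketched as nested try/finally blocks, with stand-in stream objects in place of the real nsIFileInputStream/nsIConverterInputStream:

```javascript
// A sketch of the nested try/finally pattern discussed above; fstream and
// cstream are stand-ins for the real file and converter input streams.
function readAllAndClose(fstream, cstream) {
  try {
    try {
      return cstream.readAll();
    } finally {
      // Always close the converter stream, even if reading throws
      cstream.close();
    }
  } finally {
    // Belt and braces: ensure the underlying file stream is closed even if
    // cstream.close() itself throws or fails to close it
    fstream.close();
  }
}
```

Both streams get closed regardless of which operation throws, and the original exception still propagates to the caller.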
> @@ +580,5 @@
> > else {
> > + WARN("Extensions database " + this.jsonFile.path +
> > + " exists but is not readable; rebuilding in memory");
> > + // XXX open question - if we can overwrite at save time, should we, or should we
> > + // leave the locked database in case we can recover from it next time we start up?
>
> If it's locked from reading, you won't be able to write to it anyway, right?
> And we do want to try as hard as possible to recover the info from the
> existing DB.
Well, the file could be temporarily locked by a stupid antivirus scanner just as we're starting up, and unlocked later. My main concern here is that unlike the sqlite version, if the file somehow gets bad permissions we'll never remove it; that leaves the user stuck never being able to save their XPI state changes.
If we tried to write the file even when the read fails, the OS.File.asyncSave write-temp-then-swap should succeed as long as we have permission to delete the existing (bad) file.
> @@ +593,2 @@
> >
> > + // XXX what about aForceOpen? Appears to handle the case of "don't open DB file if there aren't any extensions"?
>
> Not exactly. Or at least, not with the same behaviour as AddonRepository
> has. AddonRepository will delete an existing database file if it's empty;
> aForceOpen here only defers the creation of the database file if there isn't
> one already.
>
> I thought I saw some comment about the default theme, in relation to
> this...now I can't find it. Anyway: Not every app has a default theme (eg,
> Firefox on Android).
OK. I need to work through this code as part of the (to be written) async load patch, so I'll clean it up then.
> > + gExtensionsJSON.create(AM_Ci.nsIFile.DIRECTORY_TYPE, 0755);
>
> Since you're updating this anyway... octal notation is going away, so could
> you update that to |parseInt("755", 8)|? (I like that way because it lets us
> keep the familiar octal notation)
>
> Ditto with test_corrupt_strictcompat.js
Does FileUtils.PERMS_DIRECTORY work for you?

Created attachment 763958[details][diff][review]
asyncSave v2 - Save XPI Database JSON asynchronously, updated after review
(In reply to Blair McBride [:Unfocused] from comment #54)
> Comment on attachment 759385[details][diff][review]
> ASYNC v1 - Save XPI Database JSON asynchronously
>
> Review of attachment 759385[details][diff][review]:
> -----------------------------------------------------------------
Review comments not included here have been directly addressed in the patch.
> Gonna need re-review if DeferredSave moves to Promises.
I did change over to the Promise version; it's not difficult to go back if we decide to.
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> > + * NOTE: this may modify aNewAddon in place; callers should save the database if
> > + * necessary
>
> This seems error-prone, and one of the reasons I'd like to eventually have
> smart setters on DBAddonInternal that kicked off the timer to flush to disk.
I've been thinking about that too, partly in terms of the data structure forward/backward compatibility support. Maybe in a follow-up patch...
> > - updateAllAddonDisabledStates: function XPI_updateAllAddonDisabledStates() {
>
> Heh... should really be keeping this one instead, as it has the more correct
> method name compared to updateAddonAppDisabledStates (despite what the
> comment says).
Unfortunately, the one with the slightly misleading name was the one in the public API with external users...
> ::: toolkit/mozapps/extensions/XPIProviderUtils.js
> > + // same for the deferred save
> > + delete this._deferredSave;
>
> What's the benefit of doing this?
Just trying to make sure the test environment starts from scratch as much as possible. Given that we try to completely unload the XPIProvider module when we shut down AddonManager, it might not be necessary.
> ::: toolkit/mozapps/extensions/test/xpcshell/test_bug659772.js
> @@ +166,5 @@
> > // Should stay enabled because we migrate the compat info from
> > // the previous version of the DB
> > do_check_neq(a3, null);
> > do_check_eq(a3.version, "2.0");
> > + todo_check_false(a3.appDisabled); // XXX unresolved issue
>
> Were these issues introduced by this patch?
This is the issue discussed in comment 31 through comment 33 - the test started failing after the UPGRADE patch, I just didn't change the check until now. I could use some higher bandwidth help with this, either IRC or vidyo.

(In reply to :Irving Reid from comment #55)
> This gets rebuilt in the ASYNC patch - I set the pref but don't flush; the
> cost of being wrong is relatively low.
Eh? Rebuilding the database is basically the most expensive thing we can do.
> Well, the file could be temporarily locked by a stupid antivirus scanner
> just as we're starting up, and unlocked later. My main concern here is that
> unlike the sqlite version, if the file somehow gets bad permissions we'll
> never remove it; that leaves the user stuck never being able to save their
> XPI state changes.
>
> If we tried to write the file even when the read fails, the
> OS.File.asyncSave write-temp-then-swap should succeed as long as we have
> permission to delete the existing (bad) file.
Ah, yes. That sounds reasonable.
> Does FileUtils.PERMS_DIRECTORY work for you?
Yes - even better :)

(In reply to Blair McBride [:Unfocused] from comment #57)
> (In reply to :Irving Reid from comment #55)
> > This gets rebuilt in the ASYNC patch - I set the pref but don't flush; the
> > cost of being wrong is relatively low.
>
> Eh? Rebuilding the database is basically the most expensive thing we can do.
Would have been more correct for me to say the *risk* of being wrong is very low. The failure window is very narrow - we're only wrong if we're restarting, need to upgrade the DB schema, and FF crashes between when our first XPI DB save succeeds and the next time prefs are flushed. That said, this is also rare enough that the delay from a sync flush of prefs here won't bother users much, so I'm happy to put it back in if you want (savePrefFile() only does i/o if the prefs are dirty).
Sure would be nice to have something like DeferredSave for prefs...
> > Well, the file could be temporarily locked by a stupid antivirus scanner
> > just as we're starting up, and unlocked later. My main concern here is that
> > unlike the sqlite version, if the file somehow gets bad permissions we'll
> > never remove it; that leaves the user stuck never being able to save their
> > XPI state changes.
> >
> > If we tried to write the file even when the read fails, the
> > OS.File.asyncSave write-temp-then-swap should succeed as long as we have
> > permission to delete the existing (bad) file.
>
> Ah, yes. That sounds reasonable.
I'll do a separate patch and test for this. My rough design is that if a save fails, we set something like _lastSaveFailed and go on; future changes to the data will trigger another attempt to save and if that succeeds we clear the flag. At shutdown time, if _lastSaveFailed we make one more attempt to write. If that attempt fails we set PREF_PENDING_OPERATIONS so that we try to clean up on next restart, or we could use a separate preference instead of overloading PREF_PENDING_OPERATIONS.
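The rough design above can be sketched in plain JavaScript; this is an illustration of the proposed flow, not the actual patch, and RetrySaver/writeFn are hypothetical names:

```javascript
// Sketch of the retry design: remember that the last save failed, retry on
// the next data change, and make one final attempt at shutdown.
class RetrySaver {
  constructor(writeFn) {
    this.writeFn = writeFn;        // async function that persists the data
    this.lastSaveFailed = false;
  }
  async save(data) {
    try {
      await this.writeFn(data);
      this.lastSaveFailed = false; // a later successful save clears the flag
    } catch (e) {
      this.lastSaveFailed = true;  // the next data change retries the save
    }
  }
  async shutdown(data) {
    if (this.lastSaveFailed) {
      await this.save(data);       // one more attempt at shutdown
    }
    // Caller would set PREF_PENDING_OPERATIONS (or a dedicated pref) when
    // this returns false, so cleanup happens on the next restart.
    return !this.lastSaveFailed;
  }
}
```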

Created attachment 764935[details][diff][review]
schemaVersion v1: Only upgrade from sqlite when the schema version preference says to
This might be overkill, but the new code (before this patch) would upgrade sqlite => JSON any time the JSON file was missing, whether or not the upgrade had been done before. This patch uses the "extensions.databaseSchema" preference to only do the upgrade once. If the JSON file is lost after a successful upgrade, we rebuild from scratch rather than migrating possibly stale data from the old sqlite DB.
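The gating logic reads roughly like this; `prefs` stands in for Services.prefs, the pref name is the real one from the patch, and the DB_SCHEMA value is illustrative:

```javascript
const DB_SCHEMA = 16; // illustrative value, not the real schema version

// Decide whether to migrate from sqlite when the JSON file is missing.
function shouldMigrateFromSqlite(prefs) {
  const stored = prefs["extensions.databaseSchema"] || 0;
  // Below DB_SCHEMA: the JSON upgrade never completed, so migrate from sqlite.
  // At or above: the upgrade already ran once; a missing JSON file means the
  // data was lost, so rebuild from scratch rather than re-migrate possibly
  // stale sqlite data.
  return stored < DB_SCHEMA;
}
```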

Created attachment 764943[details][diff][review]
asyncLoad v1: Load the XPI JSON database asynchronously when possible
This patch adds async loading of the JSON database if it isn't force loaded at startup.
There are several places where synchronous APIs can force a DB load after startup; I'll add telemetry in a later patch to track whether these cases happen in the field.
The whole stack should apply cleanly now, in the order:
TESTS
JSON
UPGRADE
DeferredSave v4
asyncSave
schemaVersion
asyncLoad
At this point I'd be comfortable landing reviewed versions of this stack; the main remaining changes are (1) adding telemetry probes and (2) handling permission-denied and similar errors (see comment #58)

Comment on attachment 762461[details][diff][review]
DeferredSave v4 - Promise version
Review of attachment 762461[details][diff][review]:
-----------------------------------------------------------------
r=me
I have suggested a few changes in my review pass over DeferredSave.jsm, could you consider them before landing this patch?
::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js
@@ +145,5 @@
> + );
> +});
> +
> +// Save one set of changes, then while the write is in progress, modify the
> +// data two more times
I don't quite understand what you're checking in that test.

Created attachment 765545[details][diff][review]
DeferredSave v5 - Promise version with review updates
(In reply to David Rajchenbach Teller [:Yoric] from comment #59)
> Comment on attachment 762461[details][diff][review]
> DeferredSave v4 - Promise version
> @@ +61,5 @@
> > + if (aDelay && (aDelay > 0))
>
> You could also declare a default value for |aDelay| in the function header.
> Your call.
This is more robust in the face of odd values of aDelay
> @@ +77,5 @@
> > + * Mark the current stored data dirty, and schedule a flush to disk
> > + * @return A Promise<integer> that will be resolved after the data is written to disk;
> > + * the promise is resolved with the number of bytes written.
> > + * XXX handle case where saveChanges() is called in the middle of a flush? Should just
> > + * become dirty/inProgress again, yes?
>
> I believe that either choice is good. Note that "in the middle of a flush"
> probably actually means "between the instant the data is sent to OS.File and
> the instant it is actually written to disk."
The real question here is what the meaning of "flush()" is. Does flush() indicate that we're closing up, so future operations should be forbidden, or does it just mean "let me know when you've finished writing everything that needs to be written as of now", but future writes are still OK.
> The following formulation would avoid having to pre-define rewrite, which I
> believe would be a little nicer. Your call.
>
> this._inProgress.promise.then(
> () => {} /*ignore any error*/
> ).then(
> () => /*here, add the content of the rewrite*/
> )
I did this.
> or, with Task.jsm
>
> try {
> yield this._inProgress.promise;
> } catch (ex) {
> /* ignore errors */
> }
> // Here, the contents of |rewrite|
I tried rewriting this as a Task, but it didn't seem clearer to me so I left it as raw promises.
> @@ +189,5 @@
> > + }
> > + catch(e) {
> > + // tricky - signal saveChanges/rewrite() to reject with this error
> > + // after _inProgress is done
> > + this._flushingError = e;
>
> Why don't you immediately reject |this._pending|?
To maintain the order of promises resolving, for example:
saveChanges().then(handler_1)
... writing starts
flush().then(handler_2)
... dataProvider fails
I want handler_1 to get the success/fail from the in progress write before handler_2 gets the dataProvider failure.
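That ordering can be demonstrated with plain promises; this is a self-contained sketch of the stash-the-error-then-reject approach, not the DeferredSave code itself:

```javascript
// The dataProvider error is stashed and only rejects the pending promise
// after the in-progress write settles, so handler_1 runs before handler_2.
async function demoOrdering() {
  const order = [];
  let finishWrite;
  const inProgress = new Promise(resolve => { finishWrite = resolve; });

  // handler_1 is waiting on the write that is already in flight
  const p1 = inProgress.then(() => order.push("handler_1"));

  // the dataProvider fails while that write is still in progress; instead of
  // rejecting immediately, chain the failure after inProgress settles
  const flushingError = new Error("dataProvider failed");
  const p2 = inProgress
    .then(() => { throw flushingError; })
    .catch(e => order.push("handler_2: " + e.message));

  finishWrite();                 // the in-progress write completes
  await Promise.all([p1, p2]);
  return order;
}
```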
> @@ +212,5 @@
> > + // Nothing to write, return a pre-resolved promise
> return Promise.resolve(0);
I looked for a shortcut API but didn't see that. Thanks.
(In reply to David Rajchenbach Teller [:Yoric] from comment #63)
> ::: toolkit/mozapps/extensions/test/xpcshell/test_DeferredSave.js
> @@ +145,5 @@
> > +
> > +// Save one set of changes, then while the write is in progress, modify the
> > +// data two more times
>
> I don't quite understand what you're checking in that test.
I clarified the comment a bit - if there are multiple changes while a write is in progress, we should only write one more time with the final state after all the changes.
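The coalescing behaviour the test checks can be sketched like this; writeFn and getData are stand-ins for the real DeferredSave pieces, and this is a simplified model rather than the module's actual implementation:

```javascript
// Changes made while a write is in flight are folded into a single
// follow-up write that captures the final state.
class CoalescingSaver {
  constructor(writeFn) {
    this.writeFn = writeFn;
    this.writing = null;   // promise for the in-flight write, if any
    this.dirty = false;    // has the data changed since that write started?
  }
  saveChanges(getData) {
    if (this.writing) {
      this.dirty = true;   // just note it; no extra write is scheduled yet
      return this.writing;
    }
    this.writing = this.writeFn(getData()).then(() => {
      this.writing = null;
      if (this.dirty) {
        this.dirty = false;
        // exactly one follow-up write, with the data's final state
        return this.saveChanges(getData);
      }
      return undefined;
    });
    return this.writing;
  }
}
```

So three rapid modifications produce two disk writes: the one already in flight, plus one follow-up with the end state.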

Created attachment 765566[details][diff][review]
retrySave v1 - If the JSON file is not readable at startup, try to save the new DB anyway
To be applied on top of asyncSave.
This roughly parallels the sqlite implementation, where we would always try to remove the existing DB if there was a problem reading it - the old implementation would only fail hard if the new DB couldn't be removed; in that case it would keep a temporary in-memory DB and try to go back to the old DB on next restart.
The new code just keeps trying to do OS.File.writeAtomic(); this will normally get rid of an extensions.json file with bad permissions, because moving the newly written file into place can unlink the old one even though it's not writable.
The test changes are a bit hacky; it's not easy to simulate conditions where we can't update the json file at all but the rest of the browser still works. The alternative might be to do a Windows-specific test where we hold the existing file open to prevent removal.
This patch isn't critical for first Nightly landing, but I would prefer that it follow not too far behind.

Irving, I was just asked to take QA ownership of this. Would you briefly explain what user facing impact this change will have. Without reading through every comment here, I get the feeling that the Addons Manager should just work as usual. Is that correct or are there new features included that QA black box testing would need to address? Thanks in advance.

Tracy, this change (and the companion change in bug 853389) are intended to have no user facing impact - the changes are entirely to the underlying implementation. What we're looking for is testing to ensure that we haven't broken any of the existing features.

Created attachment 768546[details][diff][review]
DeferredSave v6 - Fix a race condition, add activity counters for telemetry
I realized I had some other changes to DeferredSave in my telemetry patch, so I figured I'd roll them together for a quick re-review.

Created attachment 769673[details][diff][review]
DeferredSave v7 - Restructure after design discussions
After talking this through with Yoric at the work week, I found a way to make the logic much simpler by taking advantage of already-resolved promises to factor out the "do we have an existing write in progress" logic.
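The restructuring can be sketched as keeping one promise chain that starts already resolved; each save just appends to it, so the "is a write in progress?" question needs no separate bookkeeping. This is a simplified model (ChainedSaver/writeFn are illustrative names), not the DeferredSave.jsm code:

```javascript
class ChainedSaver {
  constructor(writeFn) {
    this.writeFn = writeFn;
    this._chain = Promise.resolve();  // pre-resolved: no write in progress
  }
  save(data) {
    this._chain = this._chain
      .catch(() => {})                // a failed write must not wedge the chain
      .then(() => this.writeFn(data));
    return this._chain;
  }
}
```

Writes are serialized in call order whether or not one is in flight, and a rejected write doesn't block the ones queued behind it.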

Created attachment 770975[details][diff][review]
Telemetry v1: Add some simple measures for XPI Database load/saves
One thing this doesn't record is how long the writes actually take; OS.File.writeAtomic can provide that, so we could add a histogram.
Blair, feel free to hand off the review on this one if you want.

Comment on attachment 769673[details][diff][review]
DeferredSave v7 - Restructure after design discussions
Review of attachment 769673[details][diff][review]:
-----------------------------------------------------------------
Looks like this was bitrotten a little by bug 880245.
::: toolkit/mozapps/extensions/DeferredSave.jsm
@@ +104,5 @@
> + if (!this._pending) {
> + return;
> + }
> +
> + LOG("Starting timer");
Should include the filename in these log messages - with multiple DeferredSave instances (XPIProvider, AddonRepository, eventually others) there's currently no way to differentiate the logs, which is going to be painful for debugging.

Comment on attachment 770975[details][diff][review]
Telemetry v1: Add some simple measures for XPI Database load/saves
- I don't think we're consistently reporting the update reason when there are multiple reasons to update: sometimes we only report the latest reason (line 3174) and sometimes we report an earlier reason (line 3170). We should be consistent about this.. either only report the earliest/latest reason or report all reasons in a simpleMeasures array (my preference)
- This is mostly a personal preference: let's not do any heavy lifting in if-conditions. So let's move the calls to processPendingFileChanges, installDistributionAddons, and others inside or outside the if-blocks as appropriate.
- Rename updateDatabase to updateReason or something similar
- I think the coding style now requires curlies around all if-else blocks
- Style request/nit: Move the "update reason 8" logic outside the call to recordSimpleMeasure and make it explicit:
if (!updateDatabase && hasPendingChanges) {
updateDatabase = 8;
}
- Why did you remove the comment at the beginning of XPIDB_openConnection?
- Does the call to Ci.nsIAppStartup trigger initialization of the service? If so, how expensive is it?

Comment on attachment 763958[details][diff][review]
asyncSave v2 - Save XPI Database JSON asynchronously, updated after review
Review of attachment 763958[details][diff][review]:
-----------------------------------------------------------------
(In reply to :Irving Reid from comment #56)
> Unfortunately, the one with the slightly misleading name was the one in the
> public API with external users...
*facepalm*
> This is the issue discussed in comment 31 through comment 33 - the test
> started failing after the UPGRADE patch, I just didn't change the check
> until now. I could use some higher bandwidth help with this, either IRC or
> vidyo.
Oops, sorry, missed this. I'm gonna need to set aside some quiet time to think that one through.
::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +3221,2 @@
> try {
> + XPIDatabase.openConnection(false, true);
openConnection doesn't throw, right? Either way, should keep that in its original position outside this try/catch - because if it can throw, nothing else in that if block should run. (But processFileChanges is ok to throw, the rest of the code can keep running fine if that happens.)

Comment on attachment 764935[details][diff][review]
schemaVersion v1: Only upgrade from sqlite when the schema version preference says to
Review of attachment 764935[details][diff][review]:
-----------------------------------------------------------------
Not overkill - I much prefer this.
::: toolkit/mozapps/extensions/test/xpcshell/test_migrate_max_version.js
@@ +88,5 @@
> +
> + AddonManager.getAddonByID("addon1@tests.mozilla.org",
> + function check_after_rebuild(a1) {
> + // addon1 was rebuilt from extensions directory,
> + // so it appears enabled as a foreign install
foreignInstall is just a guess - hopefully we'll be better at guessing in the future, so we shouldn't rely on that here. I'd put a different version string in the DB, and differentiate based on that.

Created attachment 774164[details][diff][review]
DeferredSave v8 - Tag log messages with the file name
At Blair's suggestion, I've reworked the logging so that each instance of DeferredSave has its own instances of LOG/WARN/ERROR that know the leaf name of the file being saved, so messages look like:
WARN: DeferredSave/extensions.json: blah blah blah
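The wiring amounts to something like this sketch, where rawLog stands in for the shared AddonLogging backend:

```javascript
// Build per-instance loggers that tag every message with the leaf name of
// the file this DeferredSave instance is writing.
function makeLoggers(filePath, rawLog) {
  const leafName = filePath.split(/[\\/]/).pop();
  const tagged = level => message =>
    rawLog(level + ": DeferredSave/" + leafName + ": " + message);
  return { LOG: tagged("LOG"), WARN: tagged("WARN"), ERROR: tagged("ERROR") };
}
```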

Comment on attachment 765573[details][diff][review]
retrySave v2 - If the JSON file is not readable at startup, try to save the new DB anyway
Review of attachment 765573[details][diff][review]:
-----------------------------------------------------------------
::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ -3188,5 @@
> // If the database doesn't exist and there are add-ons installed then we
> // must update the database however if there are no add-ons then there is
> // no need to update the database.
> - // Avoid using XPIDatabase.dbFileExists, as that code is lazy-loaded,
> - // and we want to avoid loading it until absolutely necessary.
Why remove this?
::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ -579,5 @@
> if (inputAddons.schemaVersion != DB_SCHEMA) {
> // Handle mismatched JSON schema version. For now, we assume
> // compatibility for JSON data, though we throw away any fields we
> // don't know about
> - // XXX preserve unknown fields during save/restore
Still planning on doing this, right?
@@ +927,5 @@
> })
> .then(count => {
> + // If our last attempt to read or write the DB failed, force a new
> + // extensions.ini to be written to disk on the next startup
> + let result = this.lastError;
Nit: Don't really need this extra variable.

Hm, so, thinking about writeAtomic... if we crash while that's writing, we won't get a corrupt DB. But we will be left with an out-of-date DB, and a temp file. I wonder if we should detect that temp file on startup and force a DB rebuild.

Comment on attachment 770975[details][diff][review]
Telemetry v1: Add some simple measures for XPI Database load/saves
Review of attachment 770975[details][diff][review]:
-----------------------------------------------------------------
+1 to everything Vladan said in comment 76.
Would be good to have a measurement that could help inform us if the value of ASYNC_SAVE_DELAY_MS is appropriate.
::: toolkit/mozapps/extensions/XPIProvider.jsm
@@ +3142,5 @@
>
> // If the application version has changed then the database information
> + // needs to be updated. Set this to an integer, 0 means no update needed,
> + // positive integers represent various reasons.
> + let updateDatabase = aAppChanged ? 1 : 0;
Magic numbers are the devil. These should all be documented named constants at the top of the file.
::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ +570,5 @@
> // Now do some sanity checks on our JSON db
> if (!("schemaVersion" in inputAddons) || !("addons" in inputAddons)) {
> // Content of JSON file is bad, need to rebuild from scratch
> ERROR("bad JSON file contents");
> + AddonManagerPrivate.recordSimpleMeasure("XPIDB_invalid", 1);
Can you name measures like this XPIDB_error_XXX ? Makes it easier to filter on this type of measure, versus non-error measurements.
Though, saying that, most (all?) of these error measures are reasons to rebuild the database, so maybe it should be an array of reasons in one single measure, like the measure added in XPIProvider.
@@ +579,5 @@
> // Handle mismatched JSON schema version. For now, we assume
> // compatibility for JSON data, though we throw away any fields we
> // don't know about
> + AddonManagerPrivate.recordSimpleMeasure("XPIDB_schemaMismatch",
> + inputAddons.schemaVersion);
Is there any reasonable way to correlate this with XPIDB_startup_load=3 ?
@@ +656,5 @@
> *
> * @return Promise<Map> resolves to the Map of loaded JSON data stored
> * in this.addonDB; never rejects.
> */
> + getDatabase: function XPIDB_asyncOpen() {
Would be good to get measurements for how long the main-thread synchronous parts of this take (TextDecoder, parseDB, upgradeDB, and unreadableDB).
@@ +664,5 @@
> }
>
> LOG("Starting async load of XPI database " + this.jsonFile.path);
> + // Track in telemetry: did the async load start while XPIProvider
> + // was initializing? before first paint?
Would be nice to have something similar for the sync version too, so we can tell how often that happens outside of startup.
@@ +665,5 @@
>
> LOG("Starting async load of XPI database " + this.jsonFile.path);
> + // Track in telemetry: did the async load start while XPIProvider
> + // was initializing? before first paint?
> + let loadTime = 3;
Death to magic numbers!
@@ +675,5 @@
> + if (!startupInfo.firstPaint) {
> + loadTime = 2;
> + }
> + }
> + AddonManagerPrivate.recordSimpleMeasure("XPIDB_async_load", loadTime);
Nit: Could do with some white space around this block of telemetry stuff.
@@ +952,5 @@
> this.initialized = false;
>
> + if (this._deferredSave) {
> + AddonManagerPrivate.recordSimpleMeasure(
> + "XPIDB_total_saves", this._deferredSave.totalSaves);
Naming these XPIDB_saves_XXX makes it easier to filter on.
@@ +963,5 @@
> // Make sure any pending writes of the DB are complete, and we
> // finish cleaning up, and then call back
> this.flush()
> .then(null, error => {
> ERROR("Flush of XPI database failed", error);
Add a measurement for this?
@@ +1143,5 @@
> // jank-tastic! Must synchronously load DB if the theme switches from
> // an XPI theme to a lightweight theme before the DB has loaded,
> // because we're called from sync XPIProvider.addonChanged
> WARN("Synchronous load of XPI database due to getAddonsByType(" + aType + ")");
> + AddonManagerPrivate.recordSimpleMeasure("XPIDB_late_sync_open", 1);
Ditto with the magic with these.

(In reply to Blair McBride [:Unfocused] from comment #83)
> Hm, so, thinking about writeAtomic... if we crash while that's writing, we
> won't get a corrupt DB. But we will be left with an out of date DB, and a
> temp file. I wonder if we should detect that temp file on startup and force
> a DB rebuild.
Note that, even in (non-default) paranoid mode, writeAtomic cannot fully guarantee that we won't have a corrupt DB, so to ensure that we do not destroy Firefox in case of, say, a power failure during writes, having a mechanism to rebuild the DB is always a good idea.

Created attachment 784811[details][diff][review]
retrySave v3 - If the JSON file is not readable at startup, try to save the new DB anyway - review comments
Bug 888753 was causing hangs in the test suite with the new DeferredSave implementation, so I ended up having to go through the test suite making sure every shutdownManager call was deferred to the next event loop rather than happening inside a callback from AddonManager.
There was also a chunk of work to make the "locked file" tests do something reasonable on both Windows and Unix.
This patch also cleans up some new JS warnings; there's a bit of conflict with Felipe's last patch because he made some of the same changes. I'll resolve those on landing.
(In reply to Blair McBride [:Unfocused] from comment #82)
> Comment on attachment 765573[details][diff][review]
> retrySave v2 - If the JSON file is not readable at startup, try to save the
> new DB anyway
>
> Review of attachment 765573[details][diff][review]:
> -----------------------------------------------------------------
>
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> @@ -3188,5 @@
> > - // Avoid using XPIDatabase.dbFileExists, as that code is lazy-loaded,
> > - // and we want to avoid loading it until absolutely necessary.
>
> Why remove this?
XPIDatabase.dbFileExists was removed because it wasn't needed anywhere else.
> ::: toolkit/mozapps/extensions/XPIProviderUtils.js
> @@ -579,5 @@
> > - // XXX preserve unknown fields during save/restore
>
> Still planning on doing this, right?
I'll file a follow up bug
> @@ +927,5 @@
> > })
> > .then(count => {
> > + // If our last attempt to read or write the DB failed, force a new
> > + // extensions.ini to be written to disk on the next startup
> > + let result = this.lastError;
>
> Nit: Don't really need this extra variable.
XPIDatabase.lastError is a getter function that uses _deferredSave, which is destroyed in this code block, so I do need to store the value. I've renamed the variable.
(In reply to Blair McBride [:Unfocused] from comment #86)
> Comment on attachment 774164[details][diff][review]
> DeferredSave v8 - Tag log messages with the file name
>
> Review of attachment 774164[details][diff][review]:
> -----------------------------------------------------------------
>
> ::: toolkit/mozapps/extensions/DeferredSave.jsm
> @@ +41,5 @@
> > + // Set up loggers for this instance of DeferredSave
> > + Cu.import("resource://gre/modules/AddonLogging.jsm");
>
> Only need to import this once, at the top of the file - not for every
> instance.
Hrrm, hrroom. I'm not importing XPCOMUtils yet (for the lazy module loader), so I'm trapped in a twisty web of overhead-tradeoffs...
> @@ +88,5 @@
> > + get dirty() {
> > + return this._pending || this.writeInProgress;
> > + },
> > +
> > + get error() {
>
> Nit: For consistency, should this be called lastError, like the private
> property and on XPIDatabase?
OK, done.

Created attachment 784823[details][diff][review]
Telemetry v2: measures for XPI Database load/saves + review comments
(In reply to Vladan Djeric (:vladan) from comment #76)
> Comment on attachment 770975[details][diff][review]
> Telemetry v1: Add some simple measures for XPI Database load/saves
>
> - I don't think we're consistently reporting the update reason when there
> are multiple reasons to update: sometimes we only report the latest reason
> (line 3174) and sometimes we report an earlier reason (line 3170). We should
> be consistent about this.. either only report the earliest/latest reason or
> report all reasons in a simpleMeasures array (my preference)
OK. I made it an object, rather than an array of strings - do you prefer one or the other?
> - This is mostly a personal preference: let's not do any heavy lifting in
> if-conditions. So let's move the calls to processPendingFileChanges,
> installDistributionAddons, and others inside or outside the if-blocks as
> appropriate.
OK
> - Rename updateDatabase to updateReason or something similar
> - I think the coding style now requires curlies around all if-else blocks
> - Style request/nit: Move the "update reason 8" logic outside the call to
> recordSimpleMeasure and make it explicit:
OK
> - Why did you remove the comment at the beginning of XPIDB_openConnection?
Because I put the telemetry calls at the places where I call XPIDB_syncLoadDB (the new name for openConnection)
> - Does the call to Ci.nsIAppStartup trigger initialization of the service?
> If so, how expensive is it?
It looks like it does a fair bit of work, and it's really not designed to support this use case. I've switched to observing final-ui-startup in XPIProvider.jsm to keep track of where we are in startup, but it happens quite a while before first paint so it might not be the best choice either. I'm open to suggestions here...
(In reply to Blair McBride [:Unfocused] from comment #84)
> Comment on attachment 770975
> Telemetry v1: Add some simple measures for XPI Database load/saves
>
> Review of attachment 770975:
> -----------------------------------------------------------------
>
> Would be good to have a measurement that could help inform us if the value
> of ASYNC_SAVE_DELAY_MS is appropriate.
I'm tracking the number of saves per session (XPIDB_saves_total), and the number of times the data is modified while it's being written (XPIDB_saves_overlapped); hopefully those will help.
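As a rough illustration of what those two counters track (the class and method names here are simplified stand-ins, not the real DeferredSave.jsm):

```javascript
// Simplified stand-in for DeferredSave's counters (illustrative only):
// totalSaves counts writes started; overlappedSaves counts times the data
// was modified again while a write was still in flight.
class SaveCounter {
  constructor() {
    this.totalSaves = 0;
    this.overlappedSaves = 0;
    this.writeInProgress = false;
  }
  // Called whenever the in-memory data changes and a save is requested.
  saveChanges() {
    if (this.writeInProgress) {
      this.overlappedSaves++; // dirtied while a previous write was running
    } else {
      this.writeInProgress = true;
      this.totalSaves++;
    }
  }
  // Called when the asynchronous write completes.
  writeComplete() {
    this.writeInProgress = false;
  }
}

const saver = new SaveCounter();
saver.saveChanges();   // starts a write
saver.saveChanges();   // overlaps the in-flight write
saver.writeComplete();
saver.saveChanges();   // starts a second write
```

If XPIDB_saves_overlapped stays near zero across sessions, ASYNC_SAVE_DELAY_MS is probably long enough.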
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> @@ +3142,5 @@
> >
> > // If the application version has changed then the database information
> > + // needs to be updated. Set this to an integer, 0 means no update needed,
> > + // positive integers represent various reasons.
> > + let updateDatabase = aAppChanged ? 1 : 0;
>
> Magic numbers are the devil. These should all be documented named constants
> at the top of the file.
At Vladan's suggestion I've made all these different attributes of an object, so we'll get a list of names in the telemetry message.
> ::: toolkit/mozapps/extensions/XPIProviderUtils.js
> @@ +570,5 @@
> > // Now do some sanity checks on our JSON db
> > if (!("schemaVersion" in inputAddons) || !("addons" in inputAddons)) {
> > // Content of JSON file is bad, need to rebuild from scratch
> > ERROR("bad JSON file contents");
> > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_invalid", 1);
>
> Can you name measures like this XPIDB_error_XXX ? Makes it easier to filter
> on this type of measure, versus non-error measurements.
OK
> Though, saying that, most (all?) of these error measures are reasons to
> rebuild the database, so maybe it should be an array of reasons in one
> single measure, like the measure added in XPIProvider.
I'm pretty sure we'll never get more than one of these in a single run.
> @@ +579,5 @@
> > // Handle mismatched JSON schema version. For now, we assume
> > // compatibility for JSON data, though we throw away any fields we
> > // don't know about
> > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_schemaMismatch",
> > + inputAddons.schemaVersion);
>
> Is there any reasonable way to correlate this with XPIDB_startup_load=3 ?
They'll almost always happen together - I think they'll only be different if we saved the DB but failed to save the prefs on an XPI DB version upgrade.
> @@ +656,5 @@
> > *
> > * @return Promise<Map> resolves to the Map of loaded JSON data stored
> > * in this.addonDB; never rejects.
> > */
> > + getDatabase: function XPIDB_asyncOpen() {
>
> Would be good to get measurements for how long the main-thread synchronous
> parts of this take (TextDecoder, parseDB, upgradeDB, and unreadableDB).
OK.
> @@ +664,5 @@
> > }
> >
> > LOG("Starting async load of XPI database " + this.jsonFile.path);
> > + // Track in telemetry: did the async load start while XPIProvider
> > + // was initializing? before first paint?
>
> Would be nice to have something similar for the sync version too, so we can
> tell how often that happens outside of startup.
I used the same XPIProvider.runPhase as I added for the async version.
> @@ +665,5 @@
> >
> > LOG("Starting async load of XPI database " + this.jsonFile.path);
> > + // Track in telemetry: did the async load start while XPIProvider
> > + // was initializing? before first paint?
> > + let loadTime = 3;
>
> Death to magic numbers!
Now using defined constants in XPIProvider and assigned to XPIProvider.runPhase
> @@ +675,5 @@
> > + if (!startupInfo.firstPaint) {
> > + loadTime = 2;
> > + }
> > + }
> > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_async_load", loadTime);
>
> Nit: Could do with some white space around this block of telemetry stuff.
OK
> @@ +952,5 @@
> > + AddonManagerPrivate.recordSimpleMeasure(
> > + "XPIDB_total_saves", this._deferredSave.totalSaves);
>
> Naming these XPIDB_saves_XXX makes it easier to filter on.
OK
> @@ +963,5 @@
> > ERROR("Flush of XPI database failed", error);
>
> Add a measurement for this?
OK
> @@ +1143,5 @@
> > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_late_sync_open", 1);
>
> Ditto with the magic with these.
Switched from a single measure name with separate numeric values identifying the caller, to a different measure name for each caller, with XPIProvider.runPhase as the numeric value. I suppose I could make runPhase a string instead...

> > - I don't think we're consistently reporting the update reason when there
> > are multiple reasons to update: sometimes we only report the latest reason
> > (line 3174) and sometimes we report an earlier reason (line 3170). We should
> > be consistent about this.. either only report the earliest/latest reason or
> > report all reasons in a simpleMeasures array (my preference)
>
> OK. I made it an object, rather than an array of strings - do you prefer one
> or the other?
- It's a bit odd to have an object whose property values are always "1", I think an array makes more sense
- We've also added what looks like ~20 different new simple measures with this patch. Can we group them for easier reading? For example:
- simpleMeasurements.XPIDB_startup_errors =
["badJSON", "schemaMismatch"];
- simpleMeasurements.XPIDB_startup_timings =
{"rebuildBadJSON_MS": 38, "XPIDB_syncRead_MS": 72 };
- simpleMeasurements.XPIDB_init_info =
{"XPIDB_lateOpen_byType": 1, "XPIDB_lateOpen_forInternalName": 2 };
- Nit: Can you add a new helper function "recordDuration" that accepts a timestamp & subtracts it from Date.now() and calls "recordSimpleMeasure", instead of doing the math in each of the "recordSimpleMeasure" calls?
> // Checks if all expected events have been seen and if so calls the callback
> function check_test_completed(aArgs) {
> if (!gNext)
>- return;
>+ return undefined;
Does check_test_completed ever return anything other than undefined?

(In reply to Vladan Djeric (:vladan) from comment #98)
> - It's a bit odd to have an object whose property values are always "1", I
> think an array makes more sense
Oh good, I was starting to wonder if I was the only one that *hates* seeing objects used like this :)
But I'd much prefer a Set (API is far more suited to this), passed to telemetry as an array.
And as a bonus, that means you don't need a separate updateDatabase variable (just check the size of updateReasons).
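A minimal sketch of that shape (the function and reason names are illustrative, not the actual XPIProvider code):

```javascript
// Collect update reasons in a Set; a non-empty Set doubles as the
// "database needs updating" flag, so no separate updateDatabase variable.
const updateReasons = new Set();

// Hypothetical checks standing in for the real startup logic:
function noteStartupChanges({ appChanged, schemaMismatch }) {
  if (appChanged)
    updateReasons.add("appChanged");
  if (schemaMismatch)
    updateReasons.add("schemaMismatch");
}

noteStartupChanges({ appChanged: true, schemaMismatch: true });

const needsUpdate = updateReasons.size > 0;
// Telemetry wants plain JSON, so report the Set as an array of strings:
const reportedReasons = [...updateReasons];
```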
> - We've also added what looks like ~20 different new simple measures with
> this patch. Can we group them for easier reading? For example:
> - simpleMeasurements.XPIDB_startup_errors =
> ["badJSON", "schemaMismatch"];
+1
> - simpleMeasurements.XPIDB_startup_timings =
> {"rebuildBadJSON_MS": 38, "XPIDB_syncRead_MS": 72 };
-1 - this makes it hard to query/filter.
> - simpleMeasurements.XPIDB_init_info =
> {"XPIDB_lateOpen_byType": 1, "XPIDB_lateOpen_forInternalName": 2 };
The values for these can only be 1, and there can only be one of them. So it should be more like:
simpleMeasurements.XPIDB_lateOpenReason = "byType";
> - Nit: Can you add a new helper function "recordDuration" that accepts a
> timestamp & subtracts it from Date.now() and calls "recordSimpleMeasure",
> instead of doing the math in each of the "recordSimpleMeasure" calls?
+1, but should use TelemetryStopwatch.jsm for this.

(In reply to :Irving Reid from comment #95)
> > - Does the call to Ci.nsIAppStartup trigger initialization of the service?
> > If so, how expensive is it?
>
> It looks like it does a fair bit of work
Do you mean the component initialization, or just getStartupInfo()? Because that component is guaranteed to already be initialized on startup (before profile-do-change even).

(In reply to Blair McBride [:Unfocused] from comment #105)
> > - We've also added what looks like ~20 different new simple measures with
> > this patch. Can we group them for easier reading? For example:
> > - simpleMeasurements.XPIDB_startup_errors =
> > ["badJSON", "schemaMismatch"];
>
> +1
Or, if it can only be one error, then:
simpleMeasurements.XPIDB_startupErrors = "badJSON";

(In reply to Blair McBride [:Unfocused] from comment #105)
> > - simpleMeasurements.XPIDB_init_info =
> > {"XPIDB_lateOpen_byType": 1, "XPIDB_lateOpen_forInternalName": 2 };
>
> The values for these can only be 1, and there can only be one of them. So it
> should be more like:
> simpleMeasurements.XPIDB_lateOpenReason = "byType";
Er, kindly ignore the part where I completely forgot about runPhase :\
(Yay bugspam!)

Created attachment 790813
Telemetry v3: more review comments
(In reply to Vladan Djeric (:vladan) from comment #98)
> - It's a bit odd to have an object whose property values are always "1", I
> think an array makes more sense
Changed to an array of strings.
> - We've also added what looks like ~20 different new simple measures with
> this patch. Can we group them for easier reading? For example:
> - simpleMeasurements.XPIDB_startup_errors =
> ["badJSON", "schemaMismatch"];
There's only one error per startup, so XPIDB_startupError = "badJSON" or whatever
> - simpleMeasurements.XPIDB_startup_timings =
> {"rebuildBadJSON_MS": 38, "XPIDB_syncRead_MS": 72 };
Based on Blair's comment, left these as is
> - simpleMeasurements.XPIDB_init_info =
> {"XPIDB_lateOpen_byType": 1, "XPIDB_lateOpen_forInternalName": 2 };
Again, there will only ever be one per ping, and I wanted to keep the ability to pass back the startup phase as the value
> - Nit: Can you add a new helper function "recordDuration" that accepts a
> timestamp & subtracts it from Date.now() and calls "recordSimpleMeasure",
> instead of doing the math in each of the "recordSimpleMeasure" calls?
Done. I went back and forth between the approach I took (returning a mini-object with a "done()" method) and just returning a fully closed-over function. The function approach creates fewer JS objects, but doesn't look quite as natural for API consumers; it would look something like this:
(definition)

  startTimer: function(aName) {
    let startTime = Date.now();
    return () => this.recordSimpleMeasure(aName, Date.now() - startTime);
  },

(usage)

  let readTimeRecorder = AMI.startTimer("XPIDB_timer");
  // ... do interesting stuff ...
  readTimeRecorder();
If I could come up with a great naming convention for the variable holding the timer function I'd feel more comfortable, but as is it feels a bit unnatural to my object-addled brain.
As an alternative, the measure name parameter could move from the startTimer() function to the timeRecorder, and you could do:
  let durationRecorder = blah.startTimer();
  // ...
  durationRecorder("measure1");
  // ...
  durationRecorder("measure2");
but I'm not sure anyone actually needs that...
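For comparison, here's a runnable sketch of both timer shapes side by side, with recordSimpleMeasure stubbed out (the stub and the `measures` object are illustrative, not the AddonManagerPrivate API):

```javascript
// Stub standing in for AddonManagerPrivate.recordSimpleMeasure:
const measures = {};
function recordSimpleMeasure(aName, aValue) {
  measures[aName] = aValue;
}

// Shape 1: timer bound to a measure name up front; the returned
// function records the elapsed time when called.
function startTimer(aName) {
  const startTime = Date.now();
  return () => recordSimpleMeasure(aName, Date.now() - startTime);
}

// Shape 2: name supplied only when the duration is recorded, so one
// timer can feed several measures.
function startUnnamedTimer() {
  const startTime = Date.now();
  return aName => recordSimpleMeasure(aName, Date.now() - startTime);
}

const readTimeRecorder = startTimer("XPIDB_timer");
readTimeRecorder();

const durationRecorder = startUnnamedTimer();
durationRecorder("measure1");
durationRecorder("measure2");
```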
> > // Checks if all expected events have been seen and if so calls the callback
> > function check_test_completed(aArgs) {
> > if (!gNext)
> >- return;
> >+ return undefined;
>
> Does check_test_completed every return anything other than undefined?
Yes, some of the tests fail if the "we really are complete" case doesn't return the result of calling the callback.
(In reply to Blair McBride [:Unfocused] from comment #105)
> (In reply to Vladan Djeric (:vladan) from comment #98)
> > - It's a bit odd to have an object whose property values are always "1", I
> > think an array makes more sense
>
> Oh good, I was starting to wonder if I was the only one that *hates* seeing
> objects used like this :)
>
> But I'd much prefer a Set (API is far more suited to this), passed to
> telemetry as an array.
>
> And as a bonus, that means you don't need a separate updateDatabase variable
> (just check the size of updateReasons).
I could kind of smell it while I was writing it ;-> Set is overkill for this case, though; I really only need Array.push() / Array.length.
...
> > - Nit: Can you add a new helper function "recordDuration" that accepts a
> > timestamp & subtracts it from Date.now() and calls "recordSimpleMeasure",
> > instead of doing the math in each of the "recordSimpleMeasure" calls?
>
> +1, but should use TelemetryStopwatch.jsm for this.
TelemetryStopwatch only supports histograms, not simple measurements. This would be a good feature for bug 852974.
(In reply to Blair McBride [:Unfocused] from comment #109)
> Comment on attachment 784823
> Telemetry v2: measures for XPI Database load/saves + review comments
>
> Yea, I think I'd prefer a string - otherwise we'll have to keep looking up
> the numbers, and it'll get complicated when we want to add a runPhase
> between 1 and 2.
Strung.
> ::: toolkit/mozapps/extensions/XPIProvider.jsm
> @@ +3160,5 @@
> > + // Keep track of whether and why we need to open and update the database at
>
> Uber nit: Trailing whitespace.
Killed that one and a few others.
...
> @@ +1025,5 @@
> > // finish cleaning up, and then call back
> > this.flush()
> > .then(null, error => {
> > ERROR("Flush of XPI database failed", error);
> > + AddonManagerPrivate.recordSimpleMeasure("XPIDB_flush_failed", 1);
>
> Nit: XPIDB_shutdownFlush_failed
Agreed.

(In reply to :Irving Reid from comment #115)
> Done. I went back and forth about the approach I took (of returning a
> mini-object with a "done()" method)
Yea, this will do. Should be temporary anyway, as I filed bug 906524 to make TelemetryStopwatch support simple measurements.
> I could kind of smell it while I was writing it ;-> Set is overkill for this
> case, I really only need Array.push() / Array.length.
Haha, fair enough.

Comment on attachment 790813
Telemetry v3: more review comments
Review of attachment 790813:
-----------------------------------------------------------------
One last issue I just noticed...
::: toolkit/mozapps/extensions/AddonManager.jsm
@@ +2134,5 @@
> + },
> +
> + // Start a timer, record a simple measure of the time interval when
> + // timer.done() is called
> + simpleTimer: function(name) {
Nit: aName
::: toolkit/mozapps/extensions/XPIProviderUtils.js
@@ +651,2 @@
> }
> + parseTimer.done();
This is inconsistently recorded. Above, if we rebuild due to bad JSON, it's never recorded (early return). But here, if we rebuild due to other reasons, it does get recorded.

Created attachment 794065
Telemetry v4: more review comments
Took advantage of the opportunity to remove a few unused const definitions.
(In reply to Blair McBride [:Unfocused] from comment #119)
> Comment on attachment 790813
> Telemetry v3: more review comments
>
> One last issue I just noticed...
> ...
> ::: toolkit/mozapps/extensions/XPIProviderUtils.js
> @@ +651,2 @@
> > }
> > + parseTimer.done();
>
> This is inconsistently recorded. Above, if we rebuild due to bad JSON, it's
> never recorded (early return). But here, if we rebuild due to other reasons,
> it does get recorded.
Good catch. I think my intent was to not record the parse time if the parse failed, but I wasn't doing that consistently either. I've changed it so that when a rebuild is needed, the parse time is always recorded before the DB rebuild; on the happy path, the measure covers both parsing the JSON and building the DBAddonInternal objects.
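A sketch of what "always record before the rebuild" looks like (the helpers here are stand-ins for the real XPIProviderUtils.js code; rebuildDatabase and buildDBAddonInternals are stubbed for illustration):

```javascript
// Illustrative stubs; the real logic lives in XPIProviderUtils.js.
const measures = {};
function simpleTimer(aName) {
  const startTime = Date.now();
  return { done: () => { measures[aName] = Date.now() - startTime; } };
}
function rebuildDatabase() { return new Map(); }
function buildDBAddonInternals(aInputAddons) {
  return new Map(aInputAddons.addons.map(a => [a.id, a]));
}

function parseDB(aRawJSON) {
  const parseTimer = simpleTimer("XPIDB_parseDB_MS");
  let inputAddons;
  try {
    inputAddons = JSON.parse(aRawJSON);
  } catch (e) {
    parseTimer.done(); // recorded on the bad-JSON early return too
    return rebuildDatabase();
  }
  if (!("schemaVersion" in inputAddons) || !("addons" in inputAddons)) {
    parseTimer.done(); // ...and on every other rebuild path
    return rebuildDatabase();
  }
  const db = buildDBAddonInternals(inputAddons);
  parseTimer.done(); // happy path: parse plus building the objects
  return db;
}

const db = parseDB('{"schemaVersion": 1, "addons": [{"id": "a@b"}]}');
```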
(In reply to Vladan Djeric (:vladan) from comment #117)
> Comment on attachment 790813
> Telemetry v3: more review comments
>
> >+ // Start a timer, record a simple measure of the time interval when
> >+ // timer.done() is called
> >+ simpleTimer: function(name) {
> >+ let startTime = Date.now();
> >+ return {
> >+ done: () => AddonManagerPrivate.recordSimpleMeasure(name, Date.now() - startTime)
> >+ };
> > }
>
> I'd prefer to have the measurement code be in a function, but I'll let the
> module owner decide
Not sure what you mean by that - are you referring to my alternate design of returning a closed-over function rather than an object?