Add a test platform for win64 ASan

I'd like to add a test platform for win64 ASan. For now, I guess we only want to run tests on Try, because we haven't fixed any of the issues ASan reports on Windows yet and we don't know how bad the current status is. Once it's testable on Try, we can start fixing the failing tests.

(In reply to Ting-Yu Chou [:ting] from comment #3)
> :jlund, could you give me some hints on how to achieve this? Thank you.
Hi,
two thoughts:
1. as far as I know, we don't have asan windows builds. asan builds are linux only.
2. our tier 1 windows tests are still in buildbot (old continuous integration tool) and in the midst of being ported to taskcluster (new continuous integration tool, currently tier 3)
Do you need windows asan builds? Is that a thing? Do you need this to be tier 1 or tier 3? With tier 3, you can run them, verify on try, and self-serve your own tests, but they won't ever block or fail continuous-integration runs until windows on taskcluster becomes tier 1, at some point in 2017.

(In reply to Jordan Lund (:jlund) from comment #4)
> 1. as far as I know, we don't have asan windows builds. asan builds are
> linux only.
Yes, but now we can build it locally for Windows, see bug 1030826 comment 2.
> 2. our tier 1 windows tests are still in buildbot (old continuous
> integration tool) and in the midst of being ported to taskcluster (new
> continuous integration tool, currently tier 3)
>
> Do you need windows asan builds? Is that a thing?
Yes, as Windows is the top platform of our user base.
> Do you need this to be
> tier 1 or tier 3? With tier 3, you can run them, verify on try, and self
> serve your own tests but they won't ever block and fail
> continous-integration tests until windows on taskcluster becomes tier 1,
> some point in 2017.
For now, tier 3 is fine because we're just about to start fixing the errors that ASan reports. But once the errors are fixed, I'd prefer to promote it to tier 1 (whether or not taskcluster itself is tier 1 by then).

(In reply to David Major [:dmajor] from comment #6)
> > Do you need windows asan builds? Is that a thing?
>
> This bug is in support of _making_ Windows ASan be a thing. :)
apologies, I read the bug as you wanting to add a Windows ASan test, not a build. I'll try to point you in the right direction. I'll report back by EOD.

(In reply to Jordan Lund (:jlund) from comment #7)
> (In reply to David Major [:dmajor] from comment #6)
> > > Do you need windows asan builds? Is that a thing?
> >
> > This bug is in support of _making_ Windows ASan be a thing. :)
>
> apologies, I read the bug as you wanting to add a Windows ASan test not
> build. I'll try and point you in the right direction. I'll report back by EOD
status update: I'm currently distracted with releases in flight (it's release week). Will report back tomorrow.

(In reply to Ting-Yu Chou [:ting] from comment #12)
> Just walked through the steps, #2 doesn't work as custom-build-variant-cfg
> is an invalid property for Windows [1], it is used [2] for Linux.
I guess what Windows needs is something like:
testing/mozharness/configs/builds/taskcluster_firefox_win64_asan_opt.py
browser/config/mozconfigs/win64/opt-asan
and set taskcluster_firefox_win64_asan_opt.py in run/config under win64-asan/opt.
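I haven't seen the final file, so the following is only a sketch of what browser/config/mozconfigs/win64/opt-asan might contain; the include path and configure flags are assumptions, not the shipped contents (the MOZ_PKG_SPECIAL export comes up again later in the thread, in comment 43):

```sh
# Hypothetical sketch of browser/config/mozconfigs/win64/opt-asan.
# Include path and flags are assumptions, not the real file.
. "$topsrcdir/browser/config/mozconfigs/win64/common-opt"
ac_add_options --enable-address-sanitizer
ac_add_options --disable-jemalloc    # ASan supplies its own allocator
export MOZ_PKG_SPECIAL=asan          # distinguishes the package name
```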

The generated taskgraph does not contain tests like:
"test-win64-asan/opt-cppunit",
"test-win64-asan/opt-crashtest",
"test-win64-asan/opt-mochitest-1"
Not sure where to add those, is it taskcluster/ci/test/test-platforms.yml? If it is, how should I modify the file? There are "windows10-64-vm/debug" and "windows10-64-vm/opt", but I don't see how/where they are used.

(In reply to Ting-Yu Chou [:ting] from comment #14)
> taskcluster/ci/test/test-platforms.yml? If it is, how should I modify the
> file? There are "windows10-64-vm/debug" and "windows10-64-vm/opt", but I
> don't see how/where they are used.
I guess the file is correct, and they're applied based on the build-platform, so I should put in something like:

  windows2012-64-asan/opt:
      build-platform: win64-asan/opt
      test-sets:
          - common-tests

and add "windows2012-64-asan" to WORKER_TYPE [1], but I don't know what the worker type should be. BTW, I never understood why it is "Windows 2012" (e.g., Windows 2012 opt), do you know why?
[1] https://dxr.mozilla.org/mozilla-central/source/taskcluster/taskgraph/transforms/tests.py#45

I'll defer to grenade/dustin/jlund on this, this is a bit outside my area of expertise.
I can help with any questions about how taskcluster Windows workers are set up, how they execute tasks, managing privileges (both OS/system level and taskcluster level) or provide help regarding setting up new environments, customising worker behaviour, making toolchains available, managing worker configuration etc.
Regarding the implementation of specific gecko builds, the configs used in-tree and how those configs are organised, that is mostly RelEng domain knowledge (e.g. grenade/jlund/Callek/catlee/coop/kmoir/bhearsum/nthomas/... etc - or #releng in IRC). The in-tree task scheduling mechanics in gecko via mach are best known by dustin (although several people have actively contributed there and may also be able to help, such as jlund/ahal/grenade/...).
Sorry I couldn't give an absolute answer there, but hopefully the other contact details help.

:ting you are correct re the mozharness configs in comment 13.
regarding platforms: on taskcluster we currently do all *build* work on Windows Server 2012 (both 32 bit & 64 bit builds are run on a 64 bit Windows Server 2012 os, with environment configuration for x86/x86_64 compilation/linking) and run ui *test* suite work on Windows 7 (for testing of 32 bit binaries) and Windows 10 (for testing of 64 bit binaries).
so if you want to follow the current convention, asan 64 bit builds should run on Windows Server 2012 and any test suites against the binaries from those builds should run on Windows 10.
if tests require a GPU they run on windows7-32 or windows10-64
if tests do not require a GPU they run on windows7-32-vm or windows10-64-vm
i hope that answers your questions, feel free to ping me (:grenade) on #taskcluster for anything i missed or didn't explain properly

Comment on attachment 8837406 [details] [diff] [review]
wip
Review of attachment 8837406 [details] [diff] [review]:
-----------------------------------------------------------------
I think you've got the right idea here, just be careful what you're enabling :)
::: taskcluster/ci/test/test-platforms.yml
@@ +139,5 @@
>
> +windows10-64-asan/opt:
> + build-platform: win64-asan/opt
> + test-sets:
> + - common-tests
Note that when this is green and the `run-on-projects: []` is removed, it will add a nontrivial additional testing load. That might be OK, just plan ahead and talk to people with budget experience (like garndt), since it could be adding a number with a bunch of zeroes to the budget.
Also, please use the `mach taskgraph` subcommands to double-check that this is not going to introduce these tasks by default anyway. I don't remember if the test jobs (which aren't tagged with `run-on-projects: []`) will run and pull in the build jobs they depend on.

(In reply to Ting-Yu Chou [:ting] from comment #21)
> c. There's also:
>
> windows-2012-32 opt Cc[tier2] (Clang-Tidy ClangCL)
>
> So I guess I should follow the convention to be:
>
> windows-2012-64 asan
>
> but I don't know how to make it.
It seems this is also set by treeherder.platform: taskcluster/ci/build/windows.yml uses "windows2012-64" but taskcluster/ci/toolchain/windows.yml uses "windows-2012-64". I guess I'll just follow the convention in taskcluster/ci/build/windows.yml.

(In reply to Ting-Yu Chou [:ting] from comment #21)
> The Try run [1] still has something different from what I expect:
>
> a. I found the test log inside the build task [2], how should I make it like
> linux64 asan which each test are listed under "Linux x64 asan" separately?
Those seem to be xpcshell tests which also exist for the linux x64 asan build, so I shouldn't worry about them.

(In reply to Ting-Yu Chou [:ting] from comment #27)
> Ehsan, do you know what's the difference between using link.exe or clang.exe
> to link on windows?
The main difference is that if you use clang.exe, it will construct the correct linker command line, including the static library needed to link against an ASan DLL, and otherwise you need to pass that to the linker manually in LDFLAGS. Our build system just uses link.exe directly, for reasons I'm not sure of. I think at some point I tried to change that for clang-cl builds but it was difficult since we add linker arguments to LDFLAGS that can't be passed directly to clang-cl, so for ASan builds we always used a mozconfig which sets up the ASan library flags, for example see the one posted in bug 1307561 comment 0.
I honestly think switching to use clang-cl.exe as the linker may be too much work for very little gain. Perhaps we should add some code to sanitize.m4 to do that for us so that we don't need to modify the mozconfig files manually?

(In reply to :Ehsan Akhgari from comment #28)
> always used a mozconfig which sets up the ASan library flags, for example
> see the one posted in bug 1307561 comment 0.
Yes, I am doing the same here. But now other *.exe files, like obj-asan\toolkit\mozapps\update\tests\TestAUSReadStrings.exe, need clang_rt.asan_dynamic-x86_64.dll to run.
> work for very little gain. Perhaps we should add some code to sanitize.m4
> to do that for us so that we don't need to modify the mozconfig files
> manually?
Yeah, I am thinking of either 1) stripping "-fsanitize=address" from the compilation of everything that is not firefox.exe, or 2) somehow letting the DLL be found. I just need to figure out how to do either.

(In reply to Ting-Yu Chou [:ting] from comment #29)
> (In reply to :Ehsan Akhgari from comment #28)
> > always used a mozconfig which sets up the ASan library flags, for example
> > see the one posted in bug 1307561 comment 0.
>
> Yes, I am doing the same here. But now the other *.exe, like
> obj-asan\toolkit\mozapps\update\tests\TestAUSReadStrings.exe needs
> clang_rt.asan_dynamic-x86_64.dll to run.
Ah I see what the problem is. Did you package the build? This is supposed to take care of copying the DLL to the packaged build: <http://searchfox.org/mozilla-central/rev/12cf11303392edac9f1da0c02e3d9ad2ecc8f4d3/browser/installer/package-manifest.in#801>
> > work for very little gain. Perhaps we should add some code to sanitize.m4
> > to do that for us so that we don't need to modify the mozconfig files
> > manually?
>
> Yeah, I am thinking to either 1) strip out "-fsanitize=address" for the
> compilation that is not for firefox.exe or 2) somehow let the dll can be
> found. Just I need to figure out how to do those.
I think (2) is a better solution. See above.
If packaging the build before running the test isn't an option, we can set the $PATH environment variable to where that DLL is located.
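The $PATH suggestion can be sketched as a small helper that prepends the objdir's dist/bin to the environment handed to each spawned test process. `asan_test_environment` and its arguments are illustrative names, not an existing harness API:

```python
import os

def asan_test_environment(dist_bin, base_env=None):
    """Return a copy of the environment with dist/bin (where the build
    puts clang_rt.asan_dynamic-x86_64.dll) prepended to PATH, so any
    spawned *.exe can resolve the ASan runtime DLL.

    Hypothetical helper; the real harnesses build their env elsewhere."""
    env = dict(os.environ if base_env is None else base_env)
    env["PATH"] = dist_bin + os.pathsep + env.get("PATH", "")
    return env
```

The copy keeps the caller's environment untouched, so one harness can opt in without leaking the change into sibling tasks.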

(In reply to :Ehsan Akhgari from comment #30)
> > Yeah, I am thinking to either 1) strip out "-fsanitize=address" for the
> > compilation that is not for firefox.exe or 2) somehow let the dll can be
> > found. Just I need to figure out how to do those.
>
> I think (2) is a better solution. See above.
>
> If packaging the build before running the test isn't an option, we can set
> the $PATH environment variable to where that DLL is located.
I see, will go ahead with (2). I've read MSDN for the DLL search order; yes, setting $PATH seems the easiest way to work around it, but I'm not sure how to set it for every possible *.exe invocation. I'll try to figure it out.

I am not sure if this question is for you, but how can I add a directory to the $PATH environment variable so it applies globally to tasks on a test machine, for instance one of the check tests in objdir/toolkit/mozapps/update/tests/Makefile?
I need it to be global because it is for locating a DLL.

(In reply to :Ehsan Akhgari from comment #30)
> If packaging the build before running the test isn't an option, we can set
> the the $PATH environment variable to where that DLL is located.
How is this different from any other DLL that we rely on? (mozglue, win-apiset-whatever, etc.) Needing to touch $PATH seems like it's approaching the problem from the wrong angle.

The DLLs that firefox.exe relies on seem to be located in objdir/dist/bin and are copied into the package by the code in comment 30. The problem now is that other executables, e.g., objdir/toolkit/mozapps/update/tests/TestAUSReadStrings.exe, also need clang_rt.asan_dynamic-x86_64.dll (which is already in objdir/dist/bin) when we run the test.

I wonder if we could do any of:
- Don't ASan for programs that are not in the package (this only works if the programs don't depend on Firefox DLLs, but if they do, the programs should be in dist/bin already), or
- Copy these programs to dist/bin, or
- Copy the ASan DLL to the location of these programs
Setting $PATH still seems like a really big hammer...
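The third option (copying the runtime next to the compiled tests) is mechanically simple. This is only a sketch: the DLL name comes from earlier comments, while the function name and directory arguments are invented for illustration:

```python
import os
import shutil

ASAN_DLL = "clang_rt.asan_dynamic-x86_64.dll"

def copy_asan_runtime(dist_bin, program_dirs):
    """Copy the ASan runtime DLL from dist/bin next to each compiled-test
    executable, so the Windows loader finds it in the application
    directory without touching PATH. Sketch only: a real fix would hook
    into the build system's install manifests."""
    src = os.path.join(dist_bin, ASAN_DLL)
    for d in program_dirs:
        dest = os.path.join(d, ASAN_DLL)
        if not os.path.exists(dest):
            shutil.copy2(src, dest)
```

This leans on the Windows DLL search order checking the executable's own directory first, which is why no environment change is needed.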

I tend to agree with dmajor: if the dll(s) already exist somewhere in the test's task directory (e.g. under `<task_dir>\objdir` rather than `<task_dir>\dist\bin`), it might be best to move/copy/symlink them to an existing folder in the PATH during the test setup phase (or adapt whatever installs them in the first place to put them in the preferred location).
If they are available at build time, but not at test time, it might be best to package them up with the tests (depending on how big they are) or make them available e.g. via tooltool so that they can be downloaded as part of the test setup.
If the required DLLs are installed globally on the system (e.g. because they are included in a system clang installation, rather than being somewhere inside the task folder) and we want to make these system DLLs available to all tasks, that would be a different matter, and we might be able to set the default task PATH to include them (so long as this is unlikely to break other tasks). However, ideally we try to avoid placing toolchains on the system, but prefer that tasks extract required toolchains inside their task folder, wherever possible, to avoid conflicts and difficult-to-manage system dependencies (with tasks having varying/conflicting requirements with toolchain versions etc).

(In reply to David Major [:dmajor] from comment #35)
> I wonder if we could do any of:
> - Don't ASan for programs that are not in the package (this only works if
> the programs don't depend on Firefox DLLs, but if they do, the programs
> should be in dist/bin already), or
This hides bugs that get triggered by our compiled tests.
> - Copy these programs to dist/bin, or
> - Copy the ASan DLL to the location of these programs
Either of these sounds good to me. Honestly, whichever is easiest to implement is better IMO.
(In reply to Pete Moore [:pmoore][:pete] from comment #38)
> If they are available at build time, but not at test time, it might be best
> to package them up with the tests (depending on how big they are) or make
> them available e.g. via tooltool so that they can be downloaded as part of
> the test setup.
tooltool isn't a good solution, since the DLL in question comes from our toolchain, which is built by the in-tree build scripts and whose binaries we already download as toolchain artifacts. :-)
> If the required DLLs are installed globally on the system (e.g. because they
> are included in a system clang installation, rather than being somewhere
> inside the task folder) and we want to make these system DLLs available to
> all tasks, that would be a different matter, and we might be able to set the
> default task PATH to include them (so long as this is unlikely to break
> other tasks). However, ideally we try to avoid placing toolchains on the
> system, but prefer that tasks extract required toolchains inside their task
> folder, wherever possible, to avoid conflicts and difficult-to-manage system
> dependencies (with tasks having varying/conflicting requirements with
> toolchain versions etc).
We shouldn't install them globally for the same reason (they can be different for different build jobs.)

I really don't know the answer to that, but it does seem like the packager is in the wrong here -- it's generating a file with a name that overlaps with a different build. But then, that's not the part I know about, so you probably expected me to say it's the part that's wrong :)

(In reply to Ting-Yu Chou [:ting] from comment #40)
> It seems packager [2] and taskcluster [3] does not use the same package
> name, how should I fix this? Should I 1) define MOZ_SIMPLE_PACKAGE_NAME
> somewhere, 2) fix the packager, or 3) fix the taskcluster?
I chose (3) because I don't see a way to pass the task's build_platform to the packager; also, it's not something that the packager really needs to know.
(In reply to David Major [:dmajor] from comment #35)
> I wonder if we could do any of:
> - Don't ASan for programs that are not in the package (this only works if
> the programs don't depend on Firefox DLLs, but if they do, the programs
> should be in dist/bin already), or
> - Copy these programs to dist/bin, or
> - Copy the ASan DLL to the location of these programs
Bug 1051190 is how we do it on Mac: it scans all the files and rewrites the path to the dynamic library if the file references the ASan dylib.

(In reply to Ting-Yu Chou [:ting] from comment #42)
> (In reply to Ting-Yu Chou [:ting] from comment #40)
> > It seems packager [2] and taskcluster [3] does not use the same package
> > name, how should I fix this? Should I 1) define MOZ_SIMPLE_PACKAGE_NAME
> > somewhere, 2) fix the packager, or 3) fix the taskcluster?
>
> I chose (3) because I don't see a way to pass task's build_platform to the
> packager, also it's not something that the packager really need to know.
Found a better solution, adding "export MOZ_PKG_SPECIAL=asan" in mozconfig.

there are many win 10 tests that are not yet green.
https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&filter-searchStr=windows10&group_state=expanded&filter-tier=3
^ this search shows the few win 10 tests that are regularly green.
if it helps, i can give you some credentials for the windows 10 gpu worker type that would allow you to rdp to the tc windows 10 instances, which may help debug what's happening (it's a little complicated because the tasks run as a new worker each time, so it's hard to connect as the task user, but you can get on the instance as root or generic-worker to see what's on the filesystem, or run tests from the command prompt). ping me (grenade) on irc.

(In reply to Rob Thijssen (:grenade - GMT) from comment #47)
> there are many win 10 tests that are not yet green.
Yeah, I tried, and most of them time out. :(
https://treeherder.mozilla.org/#/jobs?repo=try&revision=c79d609647bcc939405fe8e19753020c52182513&filter-tier=1&filter-tier=2&filter-tier=3&group_state=expanded
Are there any bugs filed for this?
> if it helps, i can give you some credentials for the windows 10 gpu worker
> type that would allow you to rdp to the tc windows 10 instances that may
> help debug what's happening (its a little complicated because the tasks run
> as a new worker each time, so it's hard to connect as the task user, but you
> can get on the instance as root or generic-worker to see what's on the
> filesystem, or run tests from the command prompt). ping me (grenade) on irc.
That'd be very helpful, thanks for that. Will try to ping you, I'm in UTC+8.

(In reply to Rob Thijssen (:grenade - GMT) from comment #49)
> Hi :ting, if you add a gpg/pgp fingerprint to your Mozilla phonebook entry,
> I can email you some credentials.
Messaged you on IRC already; just in case you don't receive it, I've added the fingerprint. Thank you.
Clearing NI to :pmoore as I'm going to debug on a worker with :grenade's help.

main difference is that loan-t-win10-64-gpu-01 is restricted to a single instance at any moment, whereas gecko-t-win10-64-gpu will have multiple running simultaneously. this is why large numbers of jobs assigned to the loaner will take longer to complete than when assigned to the regular worker type (i can up the instance limit if you need that). instances are spawned based on demand (the number of pending jobs assigned to that worker type). when an instance is idle for 15 minutes or so, it is terminated (unless an active rdp session is detected) and its ip address is released back to the pool. you can keep the instance alive by keeping an rdp session open.
- to avoid queuing up all the tests, the try syntax can be changed (to eg: try: -b o -p win64 -u cppunit -t none)
- to spin up a new instance without waiting for a new build to complete, individual tests can also be retriggered from the treeherder interface
- individual tasks can be edited (inspect task > task details > Edit and recreate). you can even run arbitrary commands using this mechanism (by changing one or more of the commands in the payload)
i'm happy to go through this in a vidyo session if that's useful and practicable with our time zone differences. i spend a lot of time debugging windows taskcluster worker issues so i'm happy to share what i've learned

(In reply to Rob Thijssen (:grenade - GMT) from comment #53)
> - to spin up a new instance without waiting for a new build to complete,
> individual tests can also be retriggered from the treeherder interface
I retriggered test M1 for both try runs in comment 53 yesterday before I left the office, but they're still pending now. Not sure how long it takes to spin up a new instance.
> i'm happy to go through this in a vidyo session if that's useful and
> practicable with our time zone differences. i spend a lot of time debugging
> windows taskcluster worker issues so i'm happy to share what i've learned
That'd be great, but can we do it later, after I rdp to an instance and am able to reproduce the timeout?

(In reply to Ting-Yu Chou [:ting] from comment #54)
> (In reply to Rob Thijssen (:grenade - GMT) from comment #53)
> > - to spin up a new instance without waiting for a new build to complete,
> > individual tests can also be retriggered from the treeherder interface
>
> I retriggered test M1 for both try runs in comment 53 yesterday before I
> left the office, but they're still pending now. Not sure how long does it
> take to spin up a new instance.
I still can't manage to spin up a new instance; the retriggered jobs were pending until the deadline was exceeded.

yes, apologies. i haven't been able to figure it out yet either. there were some missing scopes on the new worker type, but after getting those added, i also failed to get one to spin up. will investigate again today as folks with access to provisioner logs come online.

Comment on attachment 8845793 [details]
Bug 1333003 part 5 - Include ASan runtime dll in common.tests.zip.
https://reviewboard.mozilla.org/r/118940/#review124000
::: python/mozbuild/mozbuild/action/test_archive.py:512
(Diff revision 3)
> }
> for path in set(generated_harness_files) - packaged_paths:
> entry['patterns'].append(path[len('_tests') + 1:])
> extra_entries.append(entry)
>
> + if buildconfig.defines['MOZ_ASAN'] and buildconfig.substs['CLANG_CL']:
Given that all the other package definitions are in the data structures at the top of the file, I think this code would be better at the top-level right after the declaration of `ARCHIVE_FILES`. You could keep your existing conditional here, but then just do
`ARCHIVE_FILES['common'].append(asan_dll)`.
I guess we haven't needed conditional test package contents like this in this file before.
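The suggested restructuring might look like the sketch below; `ARCHIVE_FILES` here is a tiny stand-in for the real table in test_archive.py, and the entry shape and function name are invented for illustration:

```python
# Stand-in for the real ARCHIVE_FILES table in
# python/mozbuild/mozbuild/action/test_archive.py.
ARCHIVE_FILES = {'common': []}

def register_asan_runtime(defines, substs):
    """Append the ASan runtime DLL to the 'common' archive, but only for
    clang-cl ASan builds, right after the ARCHIVE_FILES declaration as
    the review suggests. Entry shape is illustrative."""
    if defines.get('MOZ_ASAN') and substs.get('CLANG_CL'):
        ARCHIVE_FILES['common'].append({
            'source': substs['MOZ_CLANG_RT_ASAN_LIB_PATH'],
            'pattern': 'clang_rt.asan_dynamic-x86_64.dll',
        })
```

Keeping the condition next to the table makes the one conditional package entry visible alongside all the unconditional ones.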

Comment on attachment 8845794 [details]
Bug 1333003 part 6 - Fix test scripts to run ASan on Windows.
https://reviewboard.mozilla.org/r/118942/#review124918
::: build/automation.py.in:237
(Diff revision 3)
> env.setdefault('R_LOG_LEVEL', '6')
> env.setdefault('R_LOG_DESTINATION', 'stderr')
> env.setdefault('R_LOG_VERBOSE', '1')
>
> # ASan specific environment stuff
> - if self.IS_ASAN and (self.IS_LINUX or self.IS_MAC):
> + if self.IS_ASAN:
I don't think any of our test harnesses are still using automation.py, just leave this part out of the patch. (We haven't removed it because it's still used by remoteautomation.py for Android tests.)
::: testing/mozbase/mozrunner/mozrunner/utils.py:172
(Diff revision 3)
> log.info("TEST-UNEXPECTED-FAIL | runtests.py | Failed to find"
> " ASan symbolizer at %s" % llvmsym)
>
> # Returns total system memory in kilobytes.
> - # Works only on unix-like platforms where `free` is in the path.
> + if mozinfo.isWin:
> + totalMemory = int(
I probably would have used ctypes to call something like `GetPhysicallyInstalledSystemMemory` ( https://msdn.microsoft.com/en-us/library/windows/desktop/cc300158(v=vs.85).aspx ), but I guess this is similar to what's already here.
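For reference, the ctypes approach mentioned here would look roughly like the sketch below. It is Windows-only (ctypes.windll does not exist elsewhere), which is presumably why a harness would still keep per-platform fallbacks:

```python
import ctypes

def physically_installed_memory_kb():
    """Query kernel32's GetPhysicallyInstalledSystemMemory, which reports
    the installed RAM in kilobytes via an out-parameter.

    Windows-only sketch: calling this on other platforms raises, because
    ctypes.windll is only defined on Windows."""
    kb = ctypes.c_ulonglong(0)
    ok = ctypes.windll.kernel32.GetPhysicallyInstalledSystemMemory(
        ctypes.byref(kb))
    if not ok:
        raise ctypes.WinError()
    return kb.value
```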
::: testing/xpcshell/runxpcshelltests.py:945
(Diff revision 3)
>
> usingASan = "asan" in self.mozInfo and self.mozInfo["asan"]
> usingTSan = "tsan" in self.mozInfo and self.mozInfo["tsan"]
> if usingASan or usingTSan:
> # symbolizer support
> - llvmsym = os.path.join(self.xrePath, "llvm-symbolizer")
> + llvmsym = os.path.join(
Can you file a followup to make runcppunittests.py and runxpcshelltests.py use something from mozrunner.utils here instead of duplicating this code? It looks like we already have the code they need in `test_environment`, but maybe we need to split that out into smaller functions for these harnesses to use.
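The followup asked for here could centralise the lookup in one small helper in mozrunner.utils; this standalone sketch invents the name `find_llvm_symbolizer`, but the .exe-suffix handling mirrors what the two harnesses currently duplicate:

```python
import os
import sys

def find_llvm_symbolizer(xre_path):
    """Locate the llvm-symbolizer binary next to the other binaries,
    accounting for the .exe suffix on Windows. Hypothetical shared
    helper for runcppunittests.py / runxpcshelltests.py; returns None
    when the symbolizer is missing so callers can log the failure."""
    name = ("llvm-symbolizer.exe" if sys.platform == "win32"
            else "llvm-symbolizer")
    path = os.path.join(xre_path, name)
    return path if os.path.isfile(path) else None
```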

(In reply to Ting-Yu Chou [:ting] from comment #44)
> Try now has many timed out tests which I can't reproduce locally, for
This will be followed up in bug 1349420 as it is not related to the patches here.

Comment on attachment 8851440 [details]
Bug 1333003 part 8 - Include ASan runtime dll and LLVM symbolizer in jsshell package.
https://reviewboard.mozilla.org/r/123728/#review127082
::: toolkit/mozapps/installer/upload-files.mk:79
(Diff revision 1)
> ifdef WIN_UCRT_REDIST_DIR
> JSSHELL_BINS += $(notdir $(wildcard $(DIST)/bin/api-ms-win-*.dll))
> JSSHELL_BINS += ucrtbase.dll
> endif
>
> +ifeq (11,$(MOZ_ASAN)$(CLANG_CL))
You can just use ifdef MOZ_CLANG_RT_ASAN_LIB_PATH here.
This makes me realize we should be using the same variable in browser/installer/package-manifest.in instead of using a wildcard there. Could you do that while you're there? (You'll need to add a DEFINE in browser/installer/Makefile.in for that)

If you are adding stuff to the tree which is under its own license, you should certainly add documentation to indicate what that license is. If it's a binary blob, you should say where to find the source code.
Does that answer the question?
Gerv

(In reply to Gervase Markham [:gerv] from comment #122)
> If you are adding stuff to the tree which is under its own license, you
> should certainly add documentation to indicate what that license is. If it's
> a binary blob, you should say where to find the source code.
>
> Does that answer the question?
>
> Gerv
I'm not adding stuff to the tree, but some binaries to the archives that we generate, which can be publicly downloaded. I don't know if that makes any difference.
Though I don't see similar documentation for the llvm binaries in the firefox package either (browser/installer/package-manifest.in). Because of this, I'll land the patches here for now and file a follow-up bug for the documentation.

(In reply to Ting-Yu Chou [:ting] from comment #123)
> Though I don't find similar documents in firefox package for llvm binaries
> either (browser/installer/package-manifest.in). Because of this, I'll land
> the patches here for now, and file a follow up bug for the documentation.
Filed bug 1354355.

(In reply to Jordan Lund (:jlund) from comment #10)
> c. make it so it doesn't run on any tree by default but you can still
> specify it in try message via `try: -b o -p win32-asan` by adding to the
> stanza: `run-on-projects: []`
> * note: you could put `run-on-projects: try` in your local copy but
> if you are going to land on inbound and you don't want this win asan to run
> on every `try -b all` push, leave it at `run-on-projects: []`
I did this because I intend to have win64-asan only on try, but the builds/tests still always run on inbound and central. Did I miss anything, or what else could go wrong?

(In reply to Dustin J. Mitchell [:dustin] from comment #20)
> Also, please use the `mach taskgraph` subcommands to double-check that this
> is not going to introduce these tasks by default anyway. I don't remember
> if the test jobs (which aren't tagged with `run-on-projects: []`) will run
> and pull in the build jobs they depend on.
Indeed, that's exactly what's happening here. The target task selection phase is not selecting the build tasks. However, it is selecting the test tasks, and those require the builds, so it is performing the builds anyway.
Fixing this is a little ugly -- in taskcluster/ci/test/tests.yml there are `run-on-projects` defined for each test suite. They're already broken out for a number of platforms in most cases (`by-test-platform`). You could accomplish the exclusion by adding `windows10-64-asan/opt` to the alternatives for each one, with value `[]`.
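Concretely, each affected suite in taskcluster/ci/test/tests.yml would gain an entry shaped like this sketch; the `default` value shown is only a placeholder for whatever each suite already uses:

```yaml
# Hypothetical tests.yml fragment: keep win64-asan tests try-only by
# giving the asan test platform an empty run-on-projects list.
run-on-projects:
    by-test-platform:
        windows10-64-asan/opt: []          # never scheduled by default
        default: ['all']                   # placeholder for the existing default
```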