The following is a list of questions (and, hopefully, answers) regarding the "Vancouver Prospectus", the proposal for Debian release management intended to take effect for etch (sarge+1), and assembled by the Debian release managers and archive administrators at a meeting in Vancouver, Canada in early March 2005.

Why is the permitted number of buildds for an architecture restricted to 2 or 3?

Steve Langasek seems to be saying that a slow architecture would be too slow to usefully run many of the packages, would require trusting more buildd admins, and would slow down RC fix propagation; see reference 1. (ddaniels)

What about cross-compiling or using emulators in order to have a fast buildd? That would address everything except "too slow to run packages", and would only introduce the risk of a buggy compile; a buggy compile is a problem for real hardware too, though. aj's comments about emulation are at reference 9. vorlon says he'd be in favour of moderate distcc usage in reference 10. (ddaniels)

"So that the buildds can be maintained by Debian, which means that they can be promptly fixed for system-wide problems, and which means access to them can be controlled, rather than opening up users of that architecture to exploits should a random disgruntled non-developer have access to the machine and decide to abuse it." aj, reference 8. (ddaniels)

How is it that none of the four architectures to be released with etch (i386, powerpc, ia64, amd64) have the bare minimum 2 buildds, and yet all are still considered releasable? (N+1 buildds are required; presumably N is 1 rather than 0, since most developers will not upload a binary package for each of the four architectures every time they release a new source package.)

It seems that the minimum already includes one redundant machine. Note that the four architectures were just a projection. (ddaniels)

How will it be determined that a newly proposed architecture has a large enough user base to account for 10% of all mirror downloads, before that architecture is actually added to the mirrors?

"Stuff'll be transferred from scc to ftp.d.o once it gets enough mirror usage; that's awkward, and will probably be done in daily, half-gig stages over two weeks or so." "amd64'll probably just be assumed" and "powerpc isn't remotely near popular enough to qualify." aj, reference 7 (ddaniels)

Three bodies (Security, System Administration, Release) are given independent veto power over the inclusion of an architecture.

Does the entire team have to exercise this veto for it to be effective, or can one member of any team exercise this power effectively?

It seems to be the entire team that would need to veto, since the concern is that there won't be enough man-hours to do the work. IIRC someone said it was "common sense": no one willing to support it = veto. Steve Langasek said (reference 1), "It's expected that each team would exercise that veto as a team, by reaching a consensus internally." (ddaniels)

Is the availability of an able and willing Debian Developer to join one of these teams for the express purpose of caring for a given architecture expected to mitigate concerns that would otherwise lead to a veto?

I would expect so, but the time for training needs to be available and seen as worthwhile. I suspect that someone would have to demonstrate their devotion to the team's work (like Jorg did?) and then arrange to be trained. (ddaniels)

How often can/should these bodies be petitioned for a reconsideration of their veto in light of underlying changes in circumstance?

I'd guess when it's obvious that the underlying circumstances have changed. E.g. when a new member joins a team, or when an open r-c bug is fixed and packages are allowed into testing again... (ddaniels)

At most once a month (reference 1) (ddaniels paraphrasing vorlon)

How will the exercise of a veto be communicated to the Project?

I'm guessing it'd be first through the porters, then possibly d-d, and maybe even d-d-a. I'd hope it'd be first porters, and then d-d-a. (ddaniels)

The guidelines for eligibility as a released or mirrored architecture, and for inclusion in SCC, could be initially met, but later fail. For example, an architecture could fall below the 98% up-to-date mark. Does this spell automatic expulsion from the slate of releasable architectures? Similarly, for how long are the petitions for inclusion in SCC (5 developers and 50 users) assumed to remain valid?

I'm guessing the biggest holdbacks would be man-hours for the teams, and archive space. So when space and/or man-hours run low, they'd be looking to drop architectures; when space and/or man-hours are available, they'd be more receptive to adding architectures. (ddaniels)
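The 98% up-to-date criterion mentioned above is, in effect, a comparison between the current source versions and the binaries a port has built. A minimal sketch of that calculation, with hypothetical package data rather than real wanna-build or Packages-file input:

```python
def up_to_date_pct(source_versions, built_versions):
    """Percentage of source packages whose binaries on a port
    match the current source version (the 98% criterion)."""
    matched = sum(
        1 for pkg, ver in source_versions.items()
        if built_versions.get(pkg) == ver
    )
    return 100.0 * matched / len(source_versions)

# Hypothetical data: the port has built bash and gcc-3.4 at the
# current version, but its perl build lags one revision behind.
sources = {"bash": "3.0-14", "perl": "5.8.4-8", "gcc-3.4": "3.4.3-12"}
built = {"bash": "3.0-14", "perl": "5.8.4-7", "gcc-3.4": "3.4.3-12"}
pct = up_to_date_pct(sources, built)  # 2 of 3 packages current
```

A real check would also need to handle binary-NMU version suffixes and packages excluded as not-for-us, which this sketch ignores.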

"hand-holding" buildd's (mostly was a problem for kernels and shouldn't be for sarge)

Setting up and maintaining buildd's

Waiting for slow arch's to build (just be a lucky side effect)

RC-bug fix "hand-holding" on buildd's (Alternatives include leaving it to the package maintainers, asking for help...)

Reduce work for the testing migration scripts (britney) See reference 2.

How will SCC releases be made? CD image only, scc.d.o only, not at all, or added to the mirror network when ready? (ddaniels)

With the tier-1 release, provided the port tracked testing pre-freeze and has few RC bugs (according to reference 3); the packages will sit on the scc mirror network according to reference 4. (ddaniels)

If the freeze is missed, "they'll be able to simply stop autobuilding unstable, fix any remaining problems that are a major concern, and request a snapshot be done." aj, reference 5. They could also build against testing, or something else (reference 6). The decision is that of the porters (reference 6). (ddaniels)

Will tier-1 releases (i.e. etch) block uploads for scc RC fixes? If changes needed for sccs are blocked, will the Stable Release Managers accept package changes which are intended only to support an scc or other architecture? (ddaniels)

I'm guessing it's a package maintainer decision that could risk the change not being included. I'd imagine the release teams will continue to allow RC fixes even if they are arch-specific. (ddaniels)

Does VancouverProspectus solve other DebianRelease problems, such as quasi-exponential growth of packages? (EGallego)

Excluding a "universe" package from a release due to bugs is easier than excluding an architecture. I'd imagine lots of space would have to be available for archive growth, and that removing extra architectures has provided this. Some numbers might be nice though. (ddaniels)

What would be useful (to the VancouverProspectus) metrics to develop? How can they be developed? (ddaniels)

Non-useful packages could be ignored for release criteria (a not-for-us for testing)

A survey of architectures to see their processing speed (max and average)

A way to measure if programs are usable (arbitrary time to load or do something, database of users). This could be used for temporarily ignoring architecture-specific problems in not-usable packages.

A survey of architectures to see available RAM (max and average)

A way to measure memory usage of a program (automated memory profiling?)

A survey of architectures to find other limitations (disk size...) (max and average). This could be compared to the unpacked package size.
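The last metric above, comparing surveyed disk limits against unpacked package sizes, could be mechanized roughly as follows. The limit and size figures here are invented for illustration, not survey results; a real version would read Installed-Size from the Packages files.

```python
def too_big_for(arch_limits_kb, packages_kb):
    """For each architecture, list the packages whose unpacked
    (installed) size exceeds that port's assumed usable disk
    space; both figures are in kilobytes."""
    return {
        arch: [name for name, size_kb in packages_kb.items()
               if size_kb > limit_kb]
        for arch, limit_kb in arch_limits_kb.items()
    }

# Hypothetical survey numbers and package sizes, not real figures.
limits = {"m68k": 500_000, "i386": 20_000_000}
pkgs = {"openoffice.org": 650_000, "bash": 1_800}
flagged = too_big_for(limits, pkgs)
# → {'m68k': ['openoffice.org'], 'i386': []}
```

Packages flagged this way could then be candidates for the per-architecture not-for-us treatment suggested above.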