AJaeger, mordred if either of you have a minute, i'm getting post failures on glance_store experimental functional gates, i'm probably doing something stupid in .zuul.yaml that you will hopefully notice right away: https://review.openstack.org/#/c/526956/13

dmsimard: identifying "inactivity" is non-trivial (what variety of activity do you monitor? just lack of new changes/releases, or reviews and bugs going unaddressed or failure to hold meetings or dead irc channels or lack of communication specific to that project on mailing lists, or...)

infra-root: nodepool.yaml changes for feature/zuulv3 builders^. I'd also like to discuss how we can manage nodepool configs moving forward. Maybe not right now, but we now have 3 different nodepool yaml files and things like our diskimages and providers sections are copied and pasted between them. i think supporting something like nodepool.d (split configuration) or starting to template it via

mtreinish: also from a presenter notes standpoint, it's helpful to mention that the vacant triangle simply indicates values which would be above the available network bandwidth of the system under test

corvus: I'd definitely give that a try if we had data for all subscriber counts for each number of publishers. But we were up against node bandwidth limits in our tests and couldn't scale both past a certain point, which is where that diagonal comes from in the graphs

pabelanger: if we split it into base config (zk connection details), config for each provider so they can be composed where necessary, then label and diskimage config, we would probably be good. Another option is to have the launchers stop being distinct sets and have them overlap; then we can use the same config everywhere
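(For illustration, one possible shape for that split — a hypothetical nodepool.d layout; the file names, hostnames, and the idea of composing multiple files are assumptions sketched from the discussion, not something nodepool supports today:)

```yaml
# nodepool.d/base.yaml -- shared connection details, same on every launcher
zookeeper-servers:
  - host: zk01.example.org   # hypothetical host
    port: 2181

# nodepool.d/labels.yaml -- label and diskimage config shared by all launchers
labels:
  - name: ubuntu-xenial
    min-ready: 1

# nodepool.d/provider-foo.yaml -- one provider per file, so each launcher
# composes only the providers it actually runs
providers:
  - name: foo                # hypothetical provider
    diskimages:
      - name: ubuntu-xenial
```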

fungi: clarkb: I think it is fine, but I did hear some feedback that people didn't know infra was in the shared room on monday/tuesday for getting support. So maybe this time around we need to over-communicate that? I know a few projects showed up on wed/thurs/fri to talk design stuff

pabelanger: ya, though last time I specifically sent email to the dev list re specific topics that would be perfect for the helproom days like tripleo and that was ignored :/ so not sure how to communicate that better

persia: ya I think the shared room with infra, qa, release, requirements etc does sort of force other projects to be assertive to sort out what's what to get what they need. Making that easier would likely help

clarkb: Also forces out-of-room time for folks who want to have a meeting. I remember the release team especially doing this a couple times, where they didn't want to disturb others (so just were not there).

clarkb: Maybe not time for Dublin, but maybe in future have an infra rep go to a weekly meeting for all the other attending projects and ask "When is someone from that project coming to the infra room?" or "When should someone from infra come to the project room?"

pabelanger: do you know what the status of ansible linting on our zuul configs is? this is the last job-related item on the zuulv3-issues etherpad. You had a change that you abandoned because you thought it was done somewhere else?

AJaeger: putting it on my review list likely won't help much (since it's already on there, along with a couple hundred other repos i rarely get a chance to check on) but i'll try to remember to up my priority for it

oh, iirc we had to revert because in between the two reverts more people had started using the (deprecated) syntax, and then when we wanted to look, hound wasn't indexing new repos and down the rabbit hole we went

and change 526955 followed by change 535434 pushed into gerrit as a dependent set of git commits is something we've tended to refer to as a "change series", though i don't know whether that's a real gerrit term
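(For anyone unfamiliar with the term: a change series is just a chain of dependent git commits on one branch. A minimal local sketch — the repo, commit messages, and the "gerrit" remote name are all illustrative:)

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# Two commits, the second built on top of the first
echo one > feature.txt
git add feature.txt
git commit -qm "parent change"
echo two >> feature.txt
git commit -qam "child change"

git log --oneline   # child change sits on top of parent change

# Pushing both with a single command uploads them to gerrit as a
# dependent series (hypothetical remote name "gerrit"):
#   git push gerrit HEAD:refs/for/master
```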

and yeah, the per-ip-address blocking we do to avoid dos situations kicks in once you reach 100 concurrent connections and isn't on right now because we've rebooted for a new kernel and https://review.openstack.org/529712 has yet to get approved
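(That kind of per-source-IP limit is typically done with netfilter's connlimit match; an iptables-save style fragment along those lines — the port and reject action here are assumptions for illustration, not the exact production rule:)

```
# Reject a source IP once it exceeds 100 concurrent connections to HTTPS
-A INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset
```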