In Bayesian hypothesis testing, a decision is made based on a prior probability distribution over the hypotheses, an observation with a known conditional distribution given the true hypothesis, and an assignment of costs to the different types of errors. In a setting with multiple agents and the principle of "one person, one vote", the decisions of the agents are typically combined by the majority rule. This thesis considers collections of group hypothesis testing problems over which the prior itself varies. Motivated by constraints on the memory or computational resources of the agents, quantization of the prior probabilities is introduced, leading to novel analysis and design problems. Two hypotheses and three agents are sufficient to reveal the various intricacies of the setting. Such a scenario could arise with a team of three referees deciding by majority rule whether a foul was committed: the referees face a collection of problems with different prior probabilities, varying from player to player. The scenario also illustrates that even when all referees share the goal of making correct foul calls, their opinions on the relative importance of missed detections and false alarms can differ. Variants of the problem arise depending on whether the cost functions are identical and whether the referees use identical quantizers. When the referees are identical in both their cost functions and their quantizers for the prior probabilities, it is optimal for them to use the same decision rules. This homogeneity reduces the problem to an equivalent single-referee problem with a lower-variance effective noise, and the quantizer optimization problem then reduces to one previously solved by Varshney and Varshney (2008). Centroid and nearest-neighbor conditions that are necessary for quantizer optimality are provided. In contrast, the problem becomes considerably more intricate when variations in cost functions or quantizers are allowed.
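The homogeneous setting described above can be illustrated with a toy Monte Carlo sketch. All parameters below (Gaussian signal model, costs, quantizer levels) are hypothetical choices for illustration, not taken from the thesis:

```python
import numpy as np

# Toy sketch of the homogeneous setting: binary hypotheses H0/H1, each of
# three identical referees observes the signal in independent N(0, 1)
# noise, applies a Bayes likelihood-ratio threshold computed from a
# QUANTIZED prior, and the team decides by majority rule.

rng = np.random.default_rng(0)
s0, s1 = 0.0, 1.0                  # observation means under H0 and H1
c_fa, c_md = 1.0, 1.0              # false-alarm / missed-detection costs

def quantize_prior(p0, levels):
    """Replace the prior P(H0) = p0 by its nearest representation point."""
    return levels[np.argmin(np.abs(levels - p0))]

def decide(y, p0_hat):
    # Likelihood-ratio test for unit-variance Gaussian observations.
    thr = (s0 + s1) / 2 + np.log((p0_hat * c_fa) / ((1 - p0_hat) * c_md)) / (s1 - s0)
    return y > thr                 # True means "decide H1"

def majority_error(p0, levels, n=20000):
    """Monte Carlo estimate of the team's majority-rule error probability."""
    p0_hat = quantize_prior(p0, levels)
    h1 = rng.random(n) >= p0       # draw the true hypothesis per trial
    ys = np.where(h1[:, None], s1, s0) + rng.standard_normal((n, 3))
    votes = decide(ys, p0_hat).sum(axis=1)
    return np.mean((votes >= 2) != h1)

levels = np.array([0.2, 0.5, 0.8])    # a hypothetical 3-point quantizer
print(round(majority_error(0.3, levels), 3))
```

Because all three referees share one quantized prior and hence one threshold, their three independent observations act like a single observation with lower-variance effective noise, which is the reduction exploited in the homogeneous case.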
In that case, the decision-making and quantization problems give rise to strategic-form games; the decision-making game always has a Nash equilibrium. The analysis shows that conflict between referees, in the form of variation in their cost functions, degrades overall team performance. Two ways to optimize the quantizers are introduced and compared. In the purely collaborative setting, where the referees have identical cost functions, the effect of variations between their quantizers is analyzed. It is shown that the referees have an incentive to use different quantizers rather than identical ones, even though their cost functions are identical. In conclusion, a diverse team with a common goal performs best.
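The comparison between identical and diverse quantizers can likewise be sketched numerically. The signal model, quantizer levels, and costs below are illustrative assumptions, not the thesis's actual parameters, and the sketch only sets up the comparison rather than reproducing any particular result:

```python
import numpy as np

# Toy Monte Carlo sketch: three referees with IDENTICAL cost functions
# but possibly DIFFERENT quantizers for the prior P(H0); the team
# decides by majority rule.

rng = np.random.default_rng(1)
s0, s1 = 0.0, 1.0                          # observation means under H0 / H1

def threshold(p0_hat):
    # Bayes likelihood-ratio threshold for unit-variance Gaussian noise
    # and equal error costs.
    return (s0 + s1) / 2 + np.log(p0_hat / (1 - p0_hat)) / (s1 - s0)

def quantize(p0, levels):
    levels = np.asarray(levels)
    return levels[np.argmin(np.abs(levels - p0))]

def team_error(p0, quantizers, n=20000):
    """Majority-rule error when referee i uses quantizer quantizers[i]."""
    thr = np.array([threshold(quantize(p0, q)) for q in quantizers])
    h1 = rng.random(n) >= p0               # true hypothesis per trial
    ys = np.where(h1[:, None], s1, s0) + rng.standard_normal((n, 3))
    votes = (ys > thr).sum(axis=1)
    return np.mean((votes >= 2) != h1)

identical = [[0.25, 0.5, 0.75]] * 3        # all referees share one quantizer
diverse = [[0.15, 0.45, 0.8], [0.25, 0.5, 0.75], [0.3, 0.6, 0.85]]
for p0 in (0.3, 0.6):
    print(p0, team_error(p0, identical), team_error(p0, diverse))
```

With diverse quantizers, the referees apply different thresholds to the same underlying problem, so the majority vote pools three distinct decision rules; comparing the two printed error rates over a range of priors is the kind of experiment that reveals when diversity helps.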