Kellogg brings bold ideas to the table, and we gather the people who can effect change. The world knows us for combining the power of analytics and people. This is what we teach. This is how we equip leaders to think bravely.

Whichever program you choose, you will enjoy an unparalleled education, taught by our exceptional faculty and grounded in the unique Kellogg culture. Regardless of the path, your destination remains the same: a world-class management education.

Kellogg offers programs, such as its Advanced Management Programs, to help professionals sharpen their leadership, strategic and tactical skills and develop a cross-functional understanding of organizations. Learn to overcome new challenges in a dynamic environment, to scale and work effectively on a global platform, and to build a common leadership culture.

Kellogg prepares you to meet the challenges of the global economy with an expansive, fully informed view of the world along multiple dimensions: through our curriculum, the diversity of our faculty and student body, and through our global presence. Prepare here to succeed anywhere.

The global economy is changing rapidly. Innovations and new methods of collaboration are expanding every day. These changes require leaders to think in new ways and understand their real-world applications. Kellogg is at the forefront.

From day one, Kellogg students become part of a global network of 55,000 entrepreneurs, innovators and experts across every conceivable industry and endeavor. Our alumni exemplify excellence in management. They represent the advantage of the Kellogg experience.

Ronen Gradwohl joined the MEDS department at the Kellogg School of Management in 2009 after obtaining his Ph.D. in computer science and applied mathematics from the Weizmann Institute of Science. Professor Gradwohl's research focuses on strategic interactions in environments characterized by uncertainty about timing and communication, by the presence of irrational players, and by the possibility of collusion. His current research projects include a study of the properties of equilibria in such environments and the design of mechanisms in a distributed setting with faults. Additionally, he is interested in the design of distributed and cryptographic protocols resilient against both rational and adversarial manipulation. Most recently his research has focused on issues of privacy and transparency.

In this work we introduce the notion of partial exposure, in which the players of a simultaneous-move Bayesian game are exposed to the realized types and chosen actions of a subset of the other players. We show that in a large simultaneous-move game, each player has very little regret even after being partially exposed to other players. If players are given the opportunity to be exposed to others at the expense of a small decrease in utility, they will decline this opportunity, and the original Nash equilibria of the game will survive.

In a function that takes its inputs from various players, the effect of a player measures the variation he can cause in the expectation of that function. In this paper we prove a tight upper bound on the number of players with a large effect, a bound that holds even when the players' inputs are only known to be pairwise independent. We also study the effect of a set of players, and show that there always exists a "small" set of players that, when eliminated, leaves every small set with little effect. Finally, we ask whether there always exists a player with positive effect, and show that, in general, the answer is negative. More specifically, we show that if the function is nonmonotone or the distribution is only known to be pairwise independent, then it is possible that all players have zero effect.
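For concreteness, one plausible formalization of a player's effect (the precise definition in the paper may differ) is the largest shift player i can induce in the function's expectation by fixing his own input:

```latex
\[
  \mathrm{eff}_i(f) \;=\; \max_{a,\,b \,\in\, \Omega_i}
  \Bigl( \mathbb{E}\bigl[f(X) \mid X_i = a\bigr]
       \;-\; \mathbb{E}\bigl[f(X) \mid X_i = b\bigr] \Bigr),
\]
```

where X = (X_1, ..., X_n) is the vector of players' inputs and Ω_i is player i's input set. Under this reading, "all players have zero effect" means that no unilateral change of input moves the expectation of f at all.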

We consider cryptographic and physical zero-knowledge proof schemes for Sudoku, a popular combinatorial puzzle. We discuss methods that allow one party, the prover, to convince another party, the verifier, that the prover has solved a Sudoku puzzle, without revealing the solution to the verifier. The question of interest is how a prover can show that there is a solution to the given puzzle and that he knows it, while not giving away any information about the solution to the verifier. In this paper we consider several protocols that achieve these goals. Broadly speaking, the protocols are either cryptographic or physical. By a cryptographic protocol we mean one in the usual model found in the foundations of cryptography literature. In this model, two machines exchange messages, and the security of the protocol relies on computational hardness. By a physical protocol we mean one that is implementable by humans using common objects, and preferably without the aid of computers. In particular, our physical protocols utilize items such as scratch-off cards, similar to those used in lotteries, or even just simple playing cards. The cryptographic protocols are direct and efficient, and do not involve a reduction to other problems. The physical protocols are meant to be understood by lay people and implementable without the use of computers.
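To illustrate the commit-and-reveal flavor of such protocols, here is a toy, hypothetical simulation: the prover commits to a randomly digit-relabeled copy of the solution, and the verifier challenges a single row, column, or 3x3 box and checks that the opened cells form a permutation of 1..9. This is only an illustrative sketch, not one of the paper's protocols, and a single round as written is neither fully sound nor a complete zero-knowledge proof.

```python
import hashlib
import random
import secrets

# A valid Sudoku solution generated by a standard shifting pattern
# (used here purely as test data for the sketch).
SOLUTION = [[(3 * r + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]

def commit(value):
    """Hash-based commitment to a single cell value (toy construction)."""
    nonce = secrets.token_hex(8)
    digest = hashlib.sha256(f"{value}:{nonce}".encode()).hexdigest()
    return digest, nonce

def prover_commitments(solution):
    """Relabel digits with a random permutation, then commit cell by cell.

    The relabeling hides the actual digits: opening one row, column, or
    box reveals only that it is a permutation of 1..9.
    """
    perm = list(range(1, 10))
    random.shuffle(perm)
    relabeled = [[perm[v - 1] for v in row] for row in solution]
    table = [[commit(v) for v in row] for row in relabeled]
    return relabeled, table

def verifier_check(relabeled, table, kind, idx):
    """Open one challenged region and verify commitments and the permutation."""
    if kind == "row":
        cells = [(idx, c) for c in range(9)]
    elif kind == "col":
        cells = [(r, idx) for r in range(9)]
    else:  # 3x3 box, indexed 0..8 left-to-right, top-to-bottom
        br, bc = 3 * (idx // 3), 3 * (idx % 3)
        cells = [(br + r, bc + c) for r in range(3) for c in range(3)]
    opened = []
    for r, c in cells:
        digest, nonce = table[r][c]
        value = relabeled[r][c]
        # Re-derive the commitment to make sure the opening is honest.
        assert hashlib.sha256(f"{value}:{nonce}".encode()).hexdigest() == digest
        opened.append(value)
    return sorted(opened) == list(range(1, 10))
```

A real protocol would repeat many independent rounds (fresh relabeling and fresh commitments each time) so that a cheating prover is caught with high probability, and would also handle the puzzle's given clues.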

In this note we prove a large deviation bound on the sum of random variables with the following dependency structure: there is a dependency graph G with a bounded chromatic number, in which each vertex represents a random variable. Variables that are represented by neighboring vertices may be arbitrarily dependent, but collections of variables that form an independent set in G are t-wise independent.
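A schematic version of the underlying decomposition (the precise tail bound and constants are in the note itself): properly color G with χ(G) colors, so that each color class is an independent set whose variables are t-wise independent, and then combine per-class bounds with a union bound:

```latex
\[
  S = \sum_{j=1}^{\chi(G)} S_j, \qquad
  \Pr\Bigl[\,\bigl|S - \mathbb{E}[S]\bigr| \ge \chi(G)\,\lambda\,\Bigr]
  \;\le\; \sum_{j=1}^{\chi(G)}
  \Pr\Bigl[\,\bigl|S_j - \mathbb{E}[S_j]\bigr| \ge \lambda\,\Bigr],
\]
```

where S_j sums the variables in color class j. Each term on the right can then be controlled by a t-th moment bound, since the variables within a class are t-wise independent.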

We consider the problem of random selection, where p players follow a protocol to jointly select a random element of a universe of size n. However, some of the players may be adversarial and collude to force the output to lie in a small subset of the universe. We describe essentially the first protocols that solve this problem in the presence of a dishonest majority in the full-information model (where the adversary is computationally unbounded and all communication is via non-simultaneous broadcast). Our protocols are nearly optimal in several parameters, including the round complexity (as a function of n), the randomness complexity, the communication complexity, and the tradeoffs between the fraction of honest players, the probability that the output lies in a small subset of the universe, and the density of this subset.

Gradwohl, Ronen, Guy Kindler, Omer Reingold and Amnon Ta-Shma. 2005. On the Error Parameter of Dispersers. Proceedings of the 2005 International Workshop on Randomization and Computation. 9: 294-305.

Optimal dispersers have better dependence on the error than optimal extractors. In this paper we give explicit disperser constructions that beat the best possible extractors in some parameters. Our constructions are not strong, but we show that having such explicit strong constructions implies a solution to the Ramsey graph construction problem.

A Nash equilibrium is an optimal strategy for each player under the assumption that others play according to their respective Nash strategies. In the presence of irrational players or coalitions of colluding players, however, it provides no guarantees. Some recent literature has focused on measuring the potential damage caused by the presence of faulty behavior, as well as designing mechanisms that are resilient against such faults. In this paper we show that large games are naturally fault tolerant. We first quantify the ways in which two subclasses of large games -- λ-continuous games and anonymous games -- are resilient against Byzantine faults (i.e., irrational behavior), coalitions, and asynchronous play. We then show that general large games also have some non-trivial resilience against faults.

In most implementation frameworks agents care only about the outcome, and not at all about the way in which it was obtained. Additionally, typical mechanisms for full implementation involve the complete revelation of all private information to the planner. In this paper we consider the problem of full implementation with agents who may prefer to protect their privacy and reveal as little of their private information as possible. We analyze the extent to which privacy-protecting mechanisms can be constructed under various assumptions about agents' predilection for privacy and the permissible game forms.

In generic perfect-information games the unique Subgame-Perfect Equilibrium (SPE) outcome is identical to the one predicted by several rationalizability notions, like Extensive-Form Rationalizability (EFR), the Backward Dominance Procedure (BDP), and Extensive-Form Rationalizability of the Agent form (AEFR). We show that, in contrast, within the general class of perfect-information games all these solution concepts are genuinely distinct in terms of the outcomes that they predict: SPE is more restrictive than EFR, which is in turn more restrictive than BDP, which is, finally, more restrictive than AEFR.

We study rationality in protocol design for the full-information model, a model characterized by computationally unbounded adversaries, no private communication, and no simultaneity within rounds. Assuming that players derive some utility from the outcomes of an interaction, we wish to design protocols that are faithful: following the protocol should be an optimal strategy for every player, for various definitions of "optimal" and under various assumptions about the behavior of others and the presence, size, and incentives of coalitions. We first focus on leader election for players who only care about whether or not they are elected. We seek protocols that are both faithful and resilient, and for some notions of faithfulness we provide protocols, whereas for others we prove impossibility results. We then proceed to random sampling, in which the aim is for the players to jointly sample from a set of m items with a distribution that is a function of players' preferences over them. We construct protocols for m>2 that are faithful and resilient when players are single-minded. We also show that there are no such protocols for 2 items or for complex preferences.

We argue that in distributed mechanism design frameworks it is important to consider not only rational manipulation by players, but also malicious, faulty behavior. To this end, we show that in some instances it is possible to take a centralized mechanism and implement it in a distributed setting in a fault tolerant manner. More specifically, we examine two distinct models of distributed mechanism design --- a Nash implementation with the planner as a node on the network, and an ex post Nash implementation with the planner only acting as a "bank". For each model we show that the implementation can be made resilient to faults.

Gradwohl, Ronen. 2008. "Price Variation in a Bipartite Exchange Network." In Proceedings of the 1st International Symposium on Algorithmic Game Theory (SAGT).

We analyze the variation of prices in a model of an exchange market introduced by Kakade et al. [11], in which buyers and sellers are represented by vertices of a bipartite graph and trade is allowed only between neighbors. In this model the graph is generated probabilistically, and each buyer is connected via preferential attachment to v sellers. We show that even though the tail of the degree distribution of the sellers gets heavier as v increases, the prices at equilibrium decrease exponentially with v. This strengthens the intuition that as the number of vendors available to buyers increases, the prices of goods decrease.

Full-Time / Part-Time MBA

Business Analytics I (DECS-430-B). This course is equivalent to the MMM core course, MMM Business Analytics (DECS-440).
Analytics is the discovery and communication of meaningful patterns in data. This course will provide students with an analytics toolkit, reinforcing basic probability and statistics while emphasizing throughout the value and pitfalls of reasoning with data. Applications will focus on connections among analytical tools, data, and business decision-making.
Students are required to complete the Business Analytics Prep Course.