Abstract: I'll share three vignettes involving Avi, the permanent of an n*n matrix, and me.

The first vignette involves a talk by Avi in 2000 that I attended while still an undergrad, at which he explained the remarkable procedure called Sinkhorn scaling, and its use for approximating the permanent. A year later, I used Sinkhorn scaling (and a different function inspired by the permanent) to construct a hidden-variable interpretation of quantum mechanics with some surprising properties.

The second vignette involves the remarkable self-correcting abilities of the permanent, and their role in proving non-relativizing results. I'll explain how, in 2007, careful consideration of those abilities led me and Avi to discover the algebrization barrier. I'll then discuss some still-unpublished work which shows that, contrary to a common misconception, what fails to relativize about the permanent is not its random self-reducibility (or even random self-reducibility combined with downward self-reducibility), but just its self-checkability.

The third vignette involves BosonSampling: a quantum computing proposal by myself and Alex Arkhipov which has now seen numerous experimental implementations, and was inspired by a joke that Avi made in that same 2000 lecture -- a joke about bosons needing to work harder than fermions, since the former are governed by permanents and the latter merely by determinants. I'll mention some open problems about the permanent that arise naturally in BosonSampling, as well as some recent results by my students Daniel Grier and Luke Schaeffer.
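To fix notation for the three vignettes: the permanent is the determinant's permutation expansion with every sign set to +1, which is the point of Avi's joke. A brute-force Python sketch of both polynomials (illustration only, O(n!·n); exact computation of the permanent is #P-hard, and function names are mine):

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Sum over all permutations s of prod_i A[i][s[i]]:
    the determinant's formula with every sign set to +1."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def determinant(A):
    """The same expansion with the sign of each permutation,
    computed here by counting inversions -- the 'easy' case."""
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if s[i] > s[j]:
                    sign = -sign
        total += sign * prod(A[i][s[i]] for i in range(n))
    return total
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known; Ryser's formula brings the exact computation down only to O(2^n · n).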

Abstract: While the jury is still out as to whether the impressive experimental progress on quantum gates and qubits will one day lead to a full-scale quantum computing machine, a new and exciting development has been taking place over the past decade. Computational notions such as reductions, hardness, and completeness are quickly being integrated into the very heart of the study of many-body quantum systems. The computational perspective brings deep new insights into physical questions that seem completely unrelated to computers, spanning precision measurement and sensing, tests of quantum mechanics, condensed matter physics, and even black holes and quantum gravity. I will try to explain some of these intriguing connections and, time permitting, will ponder what comes next.

Abstract: Graphs with unexpected properties often lead to interesting results in Information Theory. I will describe several examples, focusing on a very recent one whose analysis, following the tradition of Avi Wigderson, combines tools from several areas including Graph Theory, Communication Complexity and Additive Number Theory.

Abstract: Matrix scaling is a simple yet powerful technique used to balance the entries of a given matrix so that each nonzero entry is neither too small nor too large in absolute value. In this talk I will outline the connection between matrix scaling and another scaling method arising in the proof of Brascamp-Lieb inequalities, and show how both can be viewed as special cases of the same theorem. This talk is based on several joint works with Avi (and many others) in recent years.
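The simplest instance of the technique is Sinkhorn's alternating normalization; a minimal sketch in plain Python (positive entries assumed, function and parameter names are mine):

```python
def sinkhorn(A, iters=200):
    """Alternately divide each row, then each column, by its sum.
    For a square matrix with positive entries this converges to a
    doubly stochastic matrix (Sinkhorn's theorem); the row/column
    scaling factors are what the applications actually use."""
    n = len(A)
    M = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):                     # row normalization
            r = sum(M[i])
            M[i] = [x / r for x in M[i]]
        for j in range(n):                     # column normalization
            c = sum(M[i][j] for i in range(n))
            for i in range(n):
                M[i][j] /= c
    return M
```

After convergence every row sum and column sum is (approximately) 1, i.e., no entry of the scaled matrix can be disproportionately large within its row or column.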

Abstract: This talk is based on joint works with Avi and with Avishay Tal.

Its starting point is the conjecture that multilinear functions require larger constant-depth Boolean circuits than linear functions (i.e., parity). Specifically, it is conjectured that, for every $t$, some explicit $t$-linear function, with $n$ variables in each of the $t$ blocks, requires depth-three circuits of size $\exp(tn^{t/(t+1)})$. It is then suggested to test this conjecture by studying canonical Boolean circuits that are obtained from multi-linear circuits having general (multi-linear) gates of bounded arity, where the size of the canonical circuit is exponential in the gate-arity and the number of gates of such multi-linear circuits.

The talk will focus on the study of the model of multi-linear circuits with general gates. It will survey tight lower and upper bounds for general $t$-linear functions, as well as non-trivial lower bounds for some explicit functions, where a non-trivial lower bound is one that significantly improves over $\sqrt{n}$, the complexity of parity in this model. Establishing the latter lower bounds is related to providing new results on the rigidity of semi-explicit matrices (e.g., random Toeplitz matrices).

Abstract: Probabilistic algorithms for both decision and search problems can offer significant complexity improvements over deterministic algorithms. One major difference, however, is that they may output different solutions for different choices of randomness. This makes correctness amplification impossible for search algorithms, and is undesirable in settings where uniqueness of output is important, such as the generation of system-wide cryptographic parameters, or distributed settings where different sources of randomness are used.

Pseudo-Deterministic (PSD) algorithms are a class of randomized search algorithms that output a unique answer with high probability. Intuitively, they are indistinguishable from deterministic algorithms by a polynomial-time observer of their input/output behavior.

In this talk I will describe what's known about pseudo-deterministic algorithms, and in particular present a new parallel PSD algorithm for finding perfect matchings in bipartite graphs.

Abstract: Avi is a wonderful mathematician, full of imagination, vision and power, equipped with a special access to a hidden resource: the theory of computing (TOC). Witnessing Avi and fellow TOC people changing the world and mathematics in particular was a great pleasure. The lecture will touch on happy days of friendship, joint ventures, collaboration, and debates with the Wigdersons, with special emphasis on discrete geometry.

Abstract: A graph is given, together with a color assigned to each vertex. Many vertices may receive the same color. We consider the NP-hard problem of finding a connected subgraph with a minimum number of vertices, such that the subgraph contains at least one vertex of each color. In particular, we are interested in perfect solutions, in which no color is repeated. Versions of the problem arise in the context of protein-protein interaction networks, social networks and sensor networks. The problem (or even a generalization in which the edges of the graph are weighted) can be solved by dynamic programming in time polynomial in the size of the graph but exponential in the number of colors. It can also be represented by an integer program with a polynomially-bounded number of variables and linear constraints. We present a simple fast heuristic algorithm, explain the rationale behind its design, and describe its performance on large 2-dimensional grid graphs, under various specifications of the number of colors and their frequency distribution, using a random model and a semi-random model.
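The dynamic program mentioned above (polynomial in the graph size, exponential in the number of colors k) can be sketched in a Dreyfus-Wagner/Steiner-tree style. This is a hedged illustration of that DP, not the heuristic the talk is about, and all names are mine:

```python
import heapq

def min_colorful_subgraph(adj, color, k):
    """adj: adjacency lists; color[v] in range(k).
    Returns the minimum number of vertices of a connected subgraph
    containing at least one vertex of each color.
    best[S][v] = min vertices of a connected subgraph containing v
    whose colors cover the set S (encoded as a bitmask)."""
    n = len(adj)
    INF = float('inf')
    best = [[INF] * n for _ in range(1 << k)]
    for v in range(n):
        best[1 << color[v]][v] = 1          # single-vertex subgraphs
    for S in range(1, 1 << k):
        # 1) merge two already-solved subtrees at a common root v
        sub = (S - 1) & S
        while sub:
            rest = S ^ sub
            for v in range(n):
                cand = best[sub][v] + best[rest][v] - 1  # v counted once
                if cand < best[S][v]:
                    best[S][v] = cand
            sub = (sub - 1) & S
        # 2) grow along edges (Dijkstra: each new vertex costs 1)
        pq = [(d, v) for v, d in enumerate(best[S]) if d < INF]
        heapq.heapify(pq)
        while pq:
            d, u = heapq.heappop(pq)
            if d > best[S][u]:
                continue
            for v in adj[u]:
                T = S | (1 << color[v])
                if d + 1 < best[T][v]:
                    best[T][v] = d + 1
                    if T == S:              # same mask: keep relaxing now
                        heapq.heappush(pq, (d + 1, v))
    return min(best[(1 << k) - 1])
```

The running time is O(3^k · n + 2^k · m log n): polynomial in the graph, exponential in k, exactly the tradeoff stated in the abstract.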

In the random model the color assignment is chosen uniformly at random among assignments with the given frequency distribution. The algorithm reliably gives near-perfect solutions, provided the distribution of color frequencies is not highly skewed.

In the semi-random model a random perfect solution is planted, and the completion of the color assignment is random. Regardless of the frequency distribution the algorithm reliably produces perfect solutions, but not usually the planted one. In this case we extend the algorithm to generate many perfect solutions, and report on its performance.

Abstract: I will first review some of the many transitions through which Avi went in life. I will then switch gears and speak about phase transitions in simplicial complexes. Perhaps the most famous discovery of Erdős and Rényi is the phase transition that G(n,p) graphs undergo around p=1/n. In the last decade, together with colleagues and students, we have been investigating random simplicial complexes, and now we can finally tell the story of the phase transition in simplicial complexes at dimensions d ≥ 2 (graphs being the case d=1). As usual in high-dimensional combinatorics, many fascinating new phenomena reveal themselves, as I will try to describe.
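The classical d = 1 transition is easy to observe empirically: below p = 1/n all components of G(n,p) are logarithmic in size, above it a giant component emerges. A small union-find sketch (all names are mine):

```python
import random
from collections import Counter

def largest_component(n, p, seed=0):
    """Sample an Erdos-Renyi graph G(n, p) and return the size of
    its largest connected component, via union-find."""
    rng = random.Random(seed)
    parent = list(range(n))
    def find(x):                      # find root, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u in range(n):                # include each pair w.p. p
        for v in range(u + 1, n):
            if rng.random() < p:
                parent[find(u)] = find(v)
    sizes = Counter(find(v) for v in range(n))
    return max(sizes.values())
```

For n = 2000, comparing p = 0.5/n with p = 2/n shows the jump from O(log n) to a component containing a constant fraction of all vertices.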

Abstract: Avi was once a graduate student at Princeton. I thought I would recall his first paper and how it happened. I would also explain what it was like to have Avi around—before he became world famous.

Abstract: The "generic" meaning of the phrase "generic" is a choice of parameters that satisfies no algebraic relation it is not forced to satisfy, in particular none of the relations that the proof needs to avoid. This can be achieved by choosing the parameters to be algebraically independent. (While this sounds very technical and impractical, a practical way of handling the genericity assumption is to choose independent random values for the parameters.) A generic choice of the parameters often turns geometric and algebraic questions into combinatorial ones. Besides algebraic independence, there are several degrees of genericity, with different combinatorial consequences. In this talk we illustrate this idea with some old and new applications of generic choice in graph theory and related areas.

Abstract: Expander graphs have played an important role in computer science over the last four decades, and have been, probably, the most fruitful interaction between math and CS, with applications in both directions. Already in the 80's, Wigderson and Friedman looked for an analogous high-dimensional theory, but only in recent years has an elaborate theory started to emerge, from a number of different directions in CS and math: the theory of random simplicial complexes, geometric and topological overlapping properties, property testing, Ramanujan complexes, and more. We will describe these developments and present some open problems and directions for further research.

Abstract: A public ledger is a tamperproof sequence of data that can be read and augmented by everyone. Public ledgers have innumerable and compelling uses. They can secure, in plain sight, all kinds of transactions, such as titles, sales, and payments, in the exact order in which they occur.

Public ledgers prevent corruption and enable very sophisticated applications, such as cryptocurrencies and smart contracts. They stand to revolutionize the way a democratic society operates. As currently implemented, however, they scale poorly and cannot achieve their enormous potential.

Algorand is a new, truly democratic, and very efficient way to construct and manage a public ledger. Unlike prior approaches based on proof of work, Algorand requires a negligible amount of computation, and generates a transaction history that, with overwhelming probability, will not “fork.”

Abstract: Avi has long been a forceful advocate of the "depth through breadth" paradigm for Theoretical Computer Science: the view that notions of complexity appear in wide-ranging fields, and that studying this breadth of scenarios can deepen our understanding of many core issues. This talk will attempt to give an example where complexity rears its head in the field of economic theory.

As economic systems "move" to the Internet, they can become much more complex and this new complexity often becomes their defining characteristic. We will consider a very simple scenario of this form: a single seller that is selling multiple items to a single buyer. We will discuss the question of how *complex* must the pricing scheme be in order for the seller to maximize (approximately, at least) his revenue.

Based on joint works with Sergiu Hart, with Shaddin Dughmi and Li Han, and with Moshe Babaioff and Yannai Gonczarowski.

Abstract: In 2000-01 the School of Mathematics ran a very successful Special Year on Computational Complexity, and proof complexity was one of its central themes. Quite a few results proved during that year, or at least directly influenced by its activities, were instrumental in shaping propositional proof complexity as we know it today. In this mostly-survey talk we will try to convey at least some impression of the state of the art in this dynamic area. We will particularly emphasize those parts, like resolution complexity and space complexity, that most directly relate to Avi's work of that period.

Abstract: We discuss several results that share a similar structure and were directly inspired by the zig-zag construction of expander graphs [Reingold Vadhan Wigderson 2000]. Specifically, we will consider the proof that undirected connectivity is computable in deterministic logarithmic space [Reingold 2005], Dinur’s proof of the PCP Theorem [Dinur 2005], the construction of constant-round interactive proofs for delegating computation [Reingold Rothblum Rothblum 2016], and high-rate locally-correctable and locally-testable codes with sub-polynomial query complexity [Kopparty Meir Ron-Zewi Saraf 2016].

The focus of the talk will not be on the details of each construction but rather on common motifs. Are there take-home lessons to apply in other settings? Is there a recipe for “zig-zag like constructions/proofs”? We will argue that such a recipe is already hinted at in centuries-old Jewish scriptures.

Abstract: The population recovery problem is an appealing idealized problem of learning in the presence of noise, proposed in a 2012 paper of Avi with Zeev Dvir, Anup Rao and Amir Yehudayoff (DRWY). In this problem we have an unknown distribution D on binary strings of length n, and our goal is to estimate the probability D(s) of a particular string s to within some small additive error. We observe samples taken from the distribution, but the catch is that each sample is randomly corrupted: for each sample, a process randomly selects each coordinate independently with probability 1-p, for some p in (0,1). In the lossy version of the problem each selected bit is replaced by "?", and in the noisy version each selected coordinate is replaced by a random bit.
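The corruption process is simple to state in code. A sketch of both versions (function and parameter names are mine; as in the abstract, each coordinate is selected for corruption independently with probability 1-p):

```python
import random

def corrupt(sample, p, noisy=False, rng=random):
    """Corrupt a binary string: each coordinate is independently
    selected with probability 1 - p; a selected bit becomes '?'
    in the lossy version, or a fresh uniform random bit in the
    noisy version. Unselected bits pass through unchanged."""
    out = []
    for bit in sample:
        if rng.random() < 1 - p:   # coordinate selected for corruption
            out.append(rng.choice('01') if noisy else '?')
        else:
            out.append(bit)
    return ''.join(out)
```

The noisy version is harder precisely because a corrupted coordinate is indistinguishable from an uncorrupted one.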

DRWY asked whether, assuming a known upper bound k on the size of the support of D, for each fixed p>0 there is an algorithm that estimates D(s) (in either the lossy or the noisy version) in time polynomial in n, k and 1/b (where b is the allowed additive error). For the lossy version, this was shown in a paper by Ankur Moitra and myself, and for the (harder) noisy version, in a recent joint work of mine with Anindya De and Sijian Tang.

The solution of the noisy version builds on the lossy version, and on previous work of DRWY, of Wigderson and Yehudayoff, and of Lovett and Zhang, and involves a number of techniques: linear programming duality, complex analysis, and discrete Fourier analysis. In this talk I'll survey this work and give some hints of the proof.

Abstract: A celebrated result of Impagliazzo and Wigderson is that, under complexity-theoretic worst-case hardness assumptions, every randomized algorithm can be transformed into one that uses only logarithmically many random bits (with polynomial slowdown). Such algorithms can then be completely derandomized (with polynomial slowdown). In the talk I will discuss recent work attempting to extend this approach to:

1. Randomized algorithms that err with probability $1-\epsilon$ for small $\epsilon$. (Here, the goal is to minimize the number of random bits/slowdown as a function of $\epsilon$).

2. Known SAT-solving randomized algorithms. (Here, polynomial slowdown is a deal-breaker, as it gives trivial algorithms that run in super-exponential time).

3. Randomized algorithms that sample from probability distributions. (Here, the goal is to sample a statistically-close distribution using only few random bits, and polynomial slowdown).

This will give me a chance to revisit some of the seminal contributions of Avi and co-authors to this area.

Based on joint work with Artemenko, Impagliazzo and Kabanets, and on joint work with Applebaum, Artemenko, and Yang.

Abstract: I will speak mostly about my battle with polynomials: how to correct errors, how to test functions for their degree, and the role these questions have played in complexity theory. Avi played a very influential role in my involvement with these problems --- so it seems reasonable that he should be subjected to one more talk on this subject from me.