Invited Talks

Tuesday, May 29, 2018

10:45am Meeting room 302 (Mountain View), level 3

Marco Campion, PhD Student, University of Verona, Italy

Indexed Grammars Abstractions and Relations with other Formal Languages

Abstract:

Indexed grammars (IG) are a formalism that generates Indexed Languages (IL), which can be recognized by nested stack automata. Indexed grammars generalize context-free grammars in that non-terminals are equipped with lists of index symbols, and indexed languages form a proper subset of the context-sensitive languages. Nested stack automata differ from pushdown automata in that the memory stack can be nested: each position in the stack can contain other stacks, and the automaton always operates on the innermost stack only. Indexed languages resemble context-free languages in that many closure properties and decidability results are similar for the two classes. Part of the interest in larger classes of languages stems from the inadequacy of finite state automata and context-free grammars for specifying all the code structures of all possible metamorphic changes of a metamorphic code.
We give a least fixed-point characterization of the languages recognized by indexed grammars. We then study relations between indexed languages and other formal language classes contained in or containing them. More precisely, we show that, in the sense of abstract interpretation, there is no best abstraction function from indexed languages to context-free languages, nor from context-sensitive languages to indexed languages. This means there is no abstraction function mapping any indexed language to a context-free language with the smallest possible loss of information. This result does not exclude the possibility of approximations: we study limitations on the structure of productions or on the memory of indexed grammars, and we then propose some widening operators to overcome these limitations.
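To make the formalism concrete, here is a small sketch (our own illustrative encoding, not from the talk) that simulates an indexed grammar for {aⁿbⁿcⁿ : n ≥ 0}, a canonical language that is indexed but not context-free. Nonterminals carry an inherited index stack; pushes and pops happen as productions are applied.

```python
from collections import deque

# Productions (x is the inherited index stack, f an index symbol):
#   S[x]   -> a S[f x] c    (push one f per surrounding a...c pair)
#   S[x]   -> T[x]          (switch to the popping phase, stack is copied)
#   T[f x] -> b T[x]        (pop one f, emit one b)
#   T[]    -> epsilon

def expand(item):
    """Yield the possible right-hand sides for one nonterminal together
    with its inherited index stack (top of stack first)."""
    nt, stack = item
    if nt == 'S':
        yield ('a', ('S', ('f',) + stack), 'c')   # S[x] -> a S[f x] c
        yield (('T', stack),)                     # S[x] -> T[x]
    elif nt == 'T':
        if stack:
            yield ('b', ('T', stack[1:]))         # T[f x] -> b T[x]
        else:
            yield ()                              # T[] -> epsilon

def language_up_to(max_len):
    """All terminal strings of length <= max_len, by breadth-first search
    over sentential forms (tuples of terminal chars and (NT, stack) pairs)."""
    results = set()
    start = (('S', ()),)
    queue, seen = deque([start]), {start}
    while queue:
        form = queue.popleft()
        idx = next((i for i, it in enumerate(form) if isinstance(it, tuple)), None)
        if idx is None:
            results.add(''.join(form))
            continue
        for rhs in expand(form[idx]):
            new = form[:idx] + rhs + form[idx + 1:]
            # a form longer than max_len + 1 can only derive longer strings
            if len(new) <= max_len + 1 and new not in seen:
                seen.add(new)
                queue.append(new)
    return sorted(results, key=len)

print(language_up_to(9))  # ['', 'abc', 'aabbcc', 'aaabbbccc']
```

The copied stack is what gives indexed grammars their extra power: every occurrence of the nonterminal inherits the same index list, so the three blocks stay synchronized.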

Thursday, May 24, 2018

2:45pm Meeting room 302 (Mountain View), level 3

Georg Fuchsbauer, Associate Research Professor, INRIA

Subversion-resistant zero knowledge

Abstract:

Zero-knowledge proofs allow a prover to convince a verifier of the validity of a statement without revealing anything else. Non-interactive zero-knowledge (NIZK) proofs are a central concept in cryptography; they rely on parameters that must be set up in a trusted way. Motivated by the subversion of “trusted” public parameters in mass-surveillance activities, we study the security of NIZK proofs in the face of parameter subversion. We investigate which security properties of NIZK proofs can be salvaged when the parameters are set up maliciously.
We then turn to SNARKs, which are proof systems with short and efficiently verifiable proofs. Motivated by the outsourcing of computation, they let an untrusted server attach a short proof that a result was computed correctly. Zero-knowledge SNARKs are used today, e.g., in anonymous cryptocurrencies such as Zcash. We prove that many ZK-SNARK schemes proposed in the literature are in fact subversion-ZK, or can be made subversion-ZK at little cost, and show that Zcash remains anonymous even if the parameter setup was subverted.

Tuesday, May 22, 2018

Abstract:

In this talk, we present the design and implementation of a tool to construct faithful models of the latency, throughput, and port usage of x86 instructions. To this end, we first discuss common notions of instruction throughput and port usage, and introduce a more precise definition of latency that, in contrast to previous definitions, considers dependencies between different pairs of input and output operands. We then develop novel algorithms to infer latency, throughput, and port usage based on automatically generated microbenchmarks and hardware performance counters that are more accurate and precise than existing work.
To facilitate the rapid construction of optimizing compilers and tools for performance prediction, the output of our tool is provided in a machine-readable format. We provide experimental results for processors of all generations of Intel’s Core architecture, i.e., from Nehalem to Coffee Lake, and discuss various cases where the output of our tool differs considerably from prior work.
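The per-operand-pair latency notion can be sketched as follows. All instruction names and cycle counts below are made up for illustration; the point is only that a single scalar latency per instruction cannot express that a dependency chain through the flags may cost more than one through registers.

```python
# Hypothetical per-(input, output)-operand-pair latencies for two
# instructions, in the spirit of a machine-readable output file.
# All cycle counts are invented for this example.
LAT = {
    "ADC": {("reg_in", "reg_out"): 1, ("flag_in", "reg_out"): 1,
            ("reg_in", "flag_out"): 1, ("flag_in", "flag_out"): 1},
    "SHL": {("reg_in", "reg_out"): 1, ("reg_in", "flag_out"): 2},
}

def chain_latency(chain):
    """Total latency of a dependency chain of (instr, in_op, out_op) steps,
    where each step's output operand feeds the next step's input operand."""
    return sum(LAT[instr][(src, dst)] for instr, src, dst in chain)

# Same instructions, different operand pairs, different chain latency:
reg_chain  = [("SHL", "reg_in", "reg_out")] * 4
flag_chain = [("SHL", "reg_in", "flag_out"), ("ADC", "flag_in", "reg_out"),
              ("SHL", "reg_in", "flag_out"), ("ADC", "flag_in", "reg_out")]
print(chain_latency(reg_chain))   # 4
print(chain_latency(flag_chain))  # 6
```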

Friday, May 11, 2018

10:45am Meeting room 302 (Mountain View), level 3

Itsaka Rakotonirina, PhD Student, INRIA Nancy-Grand Est., France

The DEEPSEC Prover

Abstract:

Automated verification has become an essential part of the security evaluation of cryptographic protocols. Recently, there has been a considerable effort to lift the theory and tool support that existed for reachability properties to the more complex case of equivalence properties. In this work we contribute to both the theory and practice of this verification problem. We establish new complexity results for static equivalence, trace equivalence and labelled bisimilarity, and provide a decision procedure for these equivalences in the case of a bounded number of sessions. We implemented the procedure in a new tool, DEEPSEC. We show through extensive experiments that it is significantly more efficient than other similar tools, while at the same time extending the scope of the protocols that can be analysed.

Wednesday, April 25, 2018

10:45am Meeting room 302 (Mountain View), level 3

Maleknaz Nayebi, Research Professor, University of Toronto

Analytical Release Management for Mobile Apps

Abstract:

Developing software products and maintaining software versions to add or modify functionality and quality is affected by several factors that have traditionally been analyzed under the headings “when to release” and “what to release”. With the emergence of two-sided markets such as app stores, release practices are changing. Moreover, the ubiquity of mobile devices has led to unprecedented growth not only in the usage of apps but also in their capacity to meet people's needs. Software applications now power society, as the majority of the population is online, and each connected individual is surrounded by a satellite of data coming from mobile and wearable devices. The opportunities and threats of big data about the general public form the backbone of Information Technology (IT) businesses targeting improvements in our daily lives. Yet every company is driven by innovation, as economic growth and survival are only possible through dynamic adaptation to users' needs.
The constant demand to change, the need to satisfy end users, and the role of innovation in gaining market share are changing release strategies and decision processes. Focusing on mobile apps as a sample of modern software products, I discuss:
- the opportunities and threats for release management of mobile apps,
- the evolution of mobile apps over different releases,
- formulations that integrate the opportunities of app stores into planning models, and
- decision support for release management of mobile applications.

Monday, April 23, 2018

12:00pm Meeting room 302 (Mountain View), level 3

Eduardo Soria-Vazquez, PhD Student, University of Bristol, UK

Large-Scale Secure Multi-Party Computation

Abstract:

Secure multi-party computation (MPC) protocols allow a group of n parties to compute some function on the parties' private inputs, while preserving a number of security properties such as privacy and correctness. The former property means that nothing leaks from the protocol execution beyond the computed output. The latter implies that the protocol enforces the integrity of the computation; namely, honest parties are not led to accept a wrong output. We present a new approach to designing concretely efficient MPC protocols against honest-but-curious adversaries corrupting a majority of the parties. Motivated by the fact that in the dishonest-majority setting the efficiency of most practical protocols does not depend on the number of honest parties, we investigate how to construct protocols whose efficiency improves as the number of honest parties increases.
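The privacy and correctness properties can be illustrated with additive secret sharing, the arithmetic analogue of the XOR sharing underlying GMW-style protocols. This is a minimal background sketch, not the protocol from the talk:

```python
import secrets

P = 2**61 - 1   # a public prime modulus (illustrative choice)

def share(x, n):
    """Split secret x into n additive shares mod P.  Any n-1 shares are
    jointly uniform, so they reveal nothing about x (privacy)."""
    sh = [secrets.randbelow(P) for _ in range(n - 1)]
    sh.append((x - sum(sh)) % P)
    return sh

def reconstruct(sh):
    """All n shares together recover the secret (correctness)."""
    return sum(sh) % P

# Shares add homomorphically: each party adds its shares of x and y
# locally, and the resulting shares reconstruct to x + y, without any
# single party ever seeing x or y.
x_sh, y_sh = share(42, 5), share(100, 5)
z_sh = [(a + b) % P for a, b in zip(x_sh, y_sh)]
print(reconstruct(z_sh))   # 142
```

Tolerating n-1 corruptions means security must hold even if all shares but one leak, which is exactly what the uniformity of any n-1 shares gives.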
Our central idea is to take protocols which are secure for n-1 corruptions and modify them to use short symmetric keys, with the aim of basing security on the concatenation of all honest parties' keys. This results in more efficient protocols tolerating fewer corruptions, whilst also introducing an LPN-style syndrome decoding assumption.
We first apply this technique to a modified version of the semi-honest GMW protocol, using OT extension with short keys. We also obtain more efficient constant-round MPC using BMR-style garbled circuits with short keys, and present an implementation of the online phase of this protocol. Our techniques start to improve upon existing protocols when there are around n=20 parties with h=6 honest parties, and as these increase we obtain up to a 13 times reduction (for n=400,h=120) in communication complexity for our GMW variant, compared with the best-known GMW-based protocol modified to use the same threshold.

Thursday, April 19, 2018

10:45am Meeting room 302 (Mountain View), level 3

Antonio Nappa, Researcher, Minsait, Indra

Inside Spectre and Meltdown vulnerabilities

Abstract:

In this presentation we analyze the (in)famous microarchitectural side-channel attacks known as Spectre and Meltdown. We also analyze the patching procedure and its effectiveness. Despite the big impact these vulnerabilities have now and will have in the future, the effort to make the patching process reliable and definitive is far from complete. We show an average performance impact of over 40%, and that without patching the CPU microcode it is not possible to fix all the vulnerabilities.

Thursday, April 12, 2018

Optic algebras: beyond immutable data structures

Abstract:

Optics provide rich abstractions and composition patterns to access and manipulate immutable data structures. However, the state of real applications is mostly handled through databases, caches, web services, etc. In this effectful setting, the usefulness of optics is severely limited, whereas algebraic theories, thanks to their potential to abstract away from particular infrastructures, shine. Unfortunately, there is a severe lack of standard algebraic theories, such as MonadState, that programmers can reuse to avoid writing their domain repositories from scratch. In this talk, we argue that optics can serve as a fruitful metaphor to design a rich catalogue of state-based algebras, and we focus on the paradigmatic case of lenses. We show how lenses can be generalised into an algebraic theory; how compositionality of these algebraic theories can be founded on lens composition; and how to exploit the resulting abstractions in the modular design of data layers. We will use Coq to illustrate examples, definitions and proofs (LensAlgebra). Lastly, we will briefly introduce Stateless, a Scala library to bring optic algebras to the masses.
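The talk's artefacts live in Coq and Scala; as a language-neutral sketch of the starting point, here is the plain lens interface (a getter/setter pair on immutable data) and the lens composition that the algebraic theories are founded on. Types and names are ours, for illustration only:

```python
from dataclasses import dataclass, replace
from typing import Callable, Generic, TypeVar

S = TypeVar("S")
A = TypeVar("A")

@dataclass(frozen=True)
class Lens(Generic[S, A]):
    """A lens on immutable data: a getter and a copy-on-write setter."""
    get: Callable[[S], A]
    set: Callable[[S, A], S]

    def compose(self, other: "Lens") -> "Lens":
        """Focus deeper: first through self, then through other."""
        return Lens(
            get=lambda s: other.get(self.get(s)),
            set=lambda s, a: self.set(s, other.set(self.get(s), a)),
        )

@dataclass(frozen=True)
class Address:
    city: str

@dataclass(frozen=True)
class Person:
    name: str
    address: Address

address = Lens(lambda p: p.address, lambda p, a: replace(p, address=a))
city = Lens(lambda a: a.city, lambda a, c: replace(a, city=c))
person_city = address.compose(city)

p = Person("Ada", Address("London"))
print(person_city.get(p))                        # London
print(person_city.set(p, "Paris").address.city)  # Paris
print(p.address.city)                            # London: p itself is unchanged
```

Generalising this to an algebraic theory, as the talk does, means replacing the pure get/set functions with operations in some effectful monad (a database, a cache) while keeping the same composition laws.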

Monday, April 02, 2018

10:45am Meeting room 302 (Mountain View), level 3

Niki Vazou, Post-doctoral Researcher, University of Maryland, USA

Liquid Haskell: Usable Language-Based Program Verification

Abstract:

Formal verification has been gaining the attention and resources of both the academic and the industrial world, since it prevents critical software bugs that cost money, energy, time, and even lives. Yet software development and formal verification are decoupled, requiring verification experts to prove properties of a template, instead of the actual, implementation, ported into verification-specific languages. My goal is to bridge formal verification and software development for the programming language Haskell. Haskell is a unique programming language in that it is a general-purpose, functional language used for industrial development, while simultaneously standing at the leading edge of research and teaching, welcoming new, experimental, yet useful features. In this talk I present Liquid Haskell, a refinement type checker in which formal specifications are expressed as a combination of Haskell’s types and expressions and are automatically checked against real Haskell code. This natural integration of specifications in the language, combined with automatic checking, has established Liquid Haskell as a usable verifier, enthusiastically accepted by both industrial and academic Haskell users. Recently, I turned Liquid Haskell into a theorem prover, in which arbitrary theorems about Haskell functions can be proved within the language. As a consequence, Liquid Haskell can be used in Haskell courses to teach the principles of mechanized theorem proving. Turning a general-purpose language into a theorem prover opens up new research questions – e.g., can theorems be used for runtime optimizations of existing real-world applications? – that I plan to explore in the future.

Friday, March 23, 2018

10:45am Meeting room 302 (Mountain View), level 3

Jean Paul Degabriele, Post-doctoral Researcher, TU Darmstadt, Germany

The Synergy Between Theory and Practice in Cryptography

Abstract:

In this talk I will try to give a flavour of my line of research by surveying some of my works. It will touch upon topics such as IPsec, EMV, SSH, security models, backdoored randomness generators, and Tor. An overarching theme of my research is to bring cryptographic theory and practice closer together. I will give examples of how cryptographic practice fails when it is not informed by theory. On the other hand, we will also see how cryptographic schemes with security proofs can still succumb to practical attacks because our security models do not reflect practical settings well enough. I will discuss how my research tries to amend these issues and helps progress towards a theory that is better aligned with practice.

Wednesday, March 21, 2018

Formal methods: from source-level safety to binary-level security

Abstract:

Several major classes of security analysis have to be performed on raw executable files, such as vulnerability analysis of mobile code or commercial off-the-shelf software, deobfuscation, or malware inspection. These analyses are highly challenging due to the very low-level and intricate nature of binary code, and there is a clear need for tools more sophisticated and automated than the currently available syntactic and dynamic approaches. On the other hand, source-level program analysis and formal methods have made tremendous progress in the past decade, and they are now an industrial reality for safety-critical applications.
Our long-term goal is precisely to fill part of this gap by developing state-of-the-art binary-level semantic analyses. In this talk, we first present the benefits of binary-level security analysis and the new challenges it brings to formal methods; then we describe our early results and achievements, including the open-source BINSEC platform and its underlying key technologies, as well as case studies on deobfuscation and vulnerability analysis.

Tuesday, March 20, 2018

Efficient Zero-Knowledge Proofs or on how to use the juiciest piece of a freshly hunted SNARK

Abstract:

Succinct Non-Interactive Arguments of Knowledge, or SNARKs, have received a lot of attention recently because of their application to providing privacy guarantees in the ZeroCash cryptocurrency. For a long time, few constructions were known, and the SNARK was thought to be an elusive, mysterious creature, as in Lewis Carroll's famous poem The Hunting of the Snark. We now have constructions over elliptic curves which can convince a verifier of the satisfiability of circuits of arbitrary size with a constant-size, very short proof. The downside is that these constructions are based on very strong and controversial assumptions, so-called "knowledge assumptions", which are even known to be non-falsifiable. Unfortunately, known impossibility results indicate that the use of such assumptions is unavoidable if one wants succinct proofs.
This talk tries to answer the following research question: how much succinctness does one need to give up to get rid of knowledge assumptions? More specifically, I will first show how to reduce the size of zero-knowledge proofs of quadratic equations (in the standard model, under falsifiable assumptions). This result can be seen as an aggregation of Groth-Sahai proofs for quadratic equations, which was left as an open problem in previous work. The key ingredient is an aggregation technique which appeared as part of a SNARK construction.
Next, I will discuss the application of these techniques to get CircuitSat proofs with improved asymptotic complexity. When the input of the circuit is not private, like some applications to verifiable computation require, our proof depends only on the depth of the circuit. This is the first asymptotic improvement we are aware of (in the standard model and under falsifiable assumptions) since the first proof due to Groth, Ostrovsky and Sahai in 2006, which is linear in the number of circuit gates.

Thursday, March 15, 2018

Abstract:

In a cyber-physical system, digital computations (e.g., a hardware component or a program) interact with the physical environment. We find examples of such systems in domains like aerospace, automotive, and medical devices, where a failure in the design of the system may have serious consequences. More recently, cyber-physical systems have been integrating mobile devices, such as Android phones, thanks to their hardware, their rich Application Programming Interface (API), and their event-driven programming model, which allows them to react to user interaction or to the physical environment. This adoption complicates the verification of cyber-physical systems, due to the complexity of the software and the lack of formal design techniques for mobile devices.
In this talk, I will first describe my work on formal verification for cyber-physical systems based on relational abstraction, where we reduce the verification problem of a hybrid (continuous and discrete) system to the verification of a purely discrete system.
Then, I will focus on a key problem that arises when developing Android programs: the protocol violation problem. The developer writes application code that is invoked by the framework (e.g., Android) when an external event is received. The main effect of this model is that the order of execution of the application code is determined both by the non-deterministic events and by the internal implementation of the framework. In this setting, a developer must not violate the protocol imposed by the framework, to avoid unexpected behaviors. The main challenge is that the protocol is implicit in the framework implementation. I will show how we can precisely formalize the framework's protocol, learn it automatically using active learning techniques, and develop an algorithm that verifies the existence of protocol violations.
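Once learned, such a framework protocol can be represented as a finite automaton over callbacks and API calls, and a trace that drives the automaton out of its transition relation is a protocol violation. A minimal sketch of this checking step; the states and call names below are illustrative, not Android's actual lifecycle:

```python
# Learned protocol as a finite automaton: (state, call) -> next state.
# Any (state, call) pair not listed is a protocol violation.
PROTOCOL = {
    ("created", "onStart"):    "started",
    ("started", "onResume"):   "resumed",
    ("resumed", "onPause"):    "started",
    ("resumed", "showDialog"): "resumed",   # only legal while resumed
    ("started", "onStop"):     "created",
}

def violates(trace, state="created"):
    """Run a trace of calls through the automaton; return the first call
    that violates the protocol, or None if the trace conforms."""
    for call in trace:
        if (state, call) not in PROTOCOL:
            return call
        state = PROTOCOL[(state, call)]
    return None

print(violates(["onStart", "onResume", "showDialog", "onPause"]))  # None
print(violates(["onStart", "showDialog"]))  # showDialog: app not yet resumed
```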
I will conclude the talk presenting the future research challenges in the implementation and verification of event-driven Cyber-Physical systems.

Abstract:

Today's Collaborative Economy has three challenges:
1. It relies on centralized hubs whose business models monetize the massive collection of personal data.
2. There is a large power imbalance between the owners of the infrastructure and the user communities, with decisions concentrated in the former.
3. The economic profits derived from the communities’ activity are likewise concentrated in the owners of the infrastructure.
Can we build platforms that are decentralized, democratic, and where profit is distributed? In this talk, I will present P2P Models (p2pmodels.eu), a new €1.5M ERC research project to build Blockchain-powered organizations which are decentralized, democratic and distribute their profits, in order to boost a new type of Collaborative Economy. The project has three legs:
1. Infrastructure: To provide a software framework to build decentralized infrastructure for Collaborative Economy organizations, providing building blocks to build agent-mediated "Decentralized Autonomous Organizations" (DAOs).
2. Governance: To enable democratic-by-design models of governance for communities, whose rules are, at least partially, encoded in the software to ensure higher levels of equality (using "smart contracts" of those DAOs).
3. Economics: To enable value distribution models which are interoperable across organizations, improving the economic sustainability of both contributors and organizations.

Tuesday, March 13, 2018

10:45am Meeting room 302 (Mountain View), level 3

Antonio Bianchi, PhD Student, University of California, Santa Barbara

Identifying and Mitigating Trust Violations in the Mobile Ecosystem

Abstract:

Mobile devices are now the most common way users handle digital information and interact with online services. Different actors, trusting each other in different ways, compose the mobile ecosystem. Users interact with apps, trusting them to access valuable and privacy-sensitive information. At the same time, apps usually communicate with remote backends and authenticate users to online services. Finally, all these interactions are mediated, on one side, by the user interface and, on the other, by the operating system.
In my research, I studied how all these different actors trust each other, and how this trust can be unfortunately violated by attackers, due to limitations on how the mobile operating systems, apps, and user interfaces are currently designed and implemented. To assist my work, I developed automated systems to perform large-scale analyses of mobile apps.
In this talk, I will describe both the tools I have developed and my findings. Specifically, I will first describe my work on how, in an Android system, it is possible to lure users to interact with malicious apps which "look like" legitimate ones. This attack completely violates the trust relationship, mediated by the user interface, between users and apps. Then, I will explain how many apps unsafely authenticate their users to remote backends, due to misplaced trust in the operating system. Finally, I will show how many apps misuse hardware-backed authentication devices, such as trusted execution environments and fingerprint readers, making them vulnerable to a variety of authentication bypass attacks. I will finish my talk presenting current open issues in the field and outlining future directions for my research.

Monday, March 12, 2018

10:45am Meeting room 302 (Mountain View), level 3

Marco Guarnieri, Post-doctoral Researcher, ETH Zurich, Switzerland

Formal foundations for access and inference control in databases

Abstract:

Databases often store and manage sensitive data. Regulating the access to databases is, therefore, essential. To this end, database security researchers have developed both access control and inference control mechanisms. Unfortunately, existing mechanisms are implemented in an ad hoc fashion, with neither precise security guarantees nor the means to verify them. This has immediate consequences as existing mechanisms are inadequate to secure modern databases and are susceptible to attacks.
In the talk, we will present two provably secure enforcement mechanisms for access and inference control in databases. The first system is ANGERONA, a provably secure inference control mechanism that prevents information leakage in the presence of probabilistic dependencies. ANGERONA is based on probabilistic logic programming, and it leverages a tractable inference algorithm for a practically relevant fragment of probabilistic logic programs.
The second part of the talk focuses on database access control. Motivated by practical attacks against commercial database systems, we present a formal framework for reasoning about the security of database access control mechanisms, and we leverage it to build a provably secure access control mechanism that thwarts attacks that existing mechanisms fail to prevent. We also present some results bridging information-flow control and database access control. The talk is based on joint work with David Basin, Srdjan Marinovic, Daniel Schoepe, Musard Balliu, and Andrei Sabelfeld.

Monday, March 05, 2018

Abstract:

The success of software engineering projects depends to a large extent on social and organizational aspects of the development community. Indeed, it depends not only on the complexity of the product or the number of requirements to be implemented, but also on people, processes, and how they impact the technical side of software development. Social debt represents patterns across the organizational structure around a software system that may lead to additional, unforeseen project costs. Condescending behavior, disgruntlement or rage quitting are just some examples of social issues that may occur among the developers of a software project. While the research community has recently investigated the underlying dynamics leading to the introduction of social debt (e.g., the so-called "community smells", which represent symptoms of the presence of social problems in a community), as well as how such debt can be paid off, there is still a noticeable lack of empirical evidence on how social debt impacts software maintenance and evolution. This talk presents a framework describing the different ways in which social debt can impact technical aspects of source code.

Friday, February 23, 2018

10:45am Meeting room 302 (Mountain View), level 3

Zsolt István, PhD Student, ETH Zurich, Switzerland

Caribou -- Intelligent Distributed Storage for the Datacenter

Abstract:

In the era of Big Data, datacenter and cloud architectures decouple compute and storage resources from each other for better scalability. While this design choice enables elastic scale-out, it also causes unnecessary data movements. One solution is to push parts of the computation down to storage, where data can be filtered more efficiently. Systems that do this are already in use and rely either on regular server machines as storage nodes or on network-attached storage devices. While the former provide complex computation and rich functionality, since there are plenty of conventional cores available to run the offloaded computation, they are quite inefficient because of the over-provisioning of computing capacity and the bandwidth mismatches between storage, CPU, and network. Networked storage devices, on the other hand, are better balanced in terms of bandwidth, but at the price of offering very limited options for offloading data processing.
With Caribou, we explore an alternative design that offers rich offloading functionality in a much more efficient package (size, energy consumption) than regular servers, but without sacrificing features such as a general purpose interface, reliable networking or replication for fault tolerance. Our FPGA-based prototype system has been designed such that the internal data management logic can saturate the network and the processing logic can saturate the storage bandwidth without either of the two being over-provisioned. Each Caribou node is a stand-alone FPGA that implements all functionality necessary for a distributed data store, including replication that is typically not supported by FPGA-based solutions.
Caribou has been released as open source. Its modular design and extensible processing pipeline make it a convenient platform for exploring domain-specific processing inside storage nodes.

Wednesday, February 21, 2018

Multi-view Data Analytics

Abstract:

Conventional unsupervised data analytics techniques have largely focused on processing datasets of single-type data, e.g., text, ECG, sensor readings, or image data. With increasing digitization, it has become common to have data objects with representations that encompass different "kinds" of information. For example, the same disease condition may be identified through EEG or fMRI data; a dataset of EEG-fMRI pairs would thus be considered a parallel two-view dataset. Datasets of text-image pairs (e.g., a description of a seashore and an image of it) and text-text pairs (e.g., problem-solution text, or multi-language text from machine translation scenarios) are other common instances of multi-view data. The challenge in multi-view exploratory analytics is to effectively leverage such parallel multi-view data to perform analytics tasks such as clustering, retrieval and anomaly detection. This talk will cover some emerging trends in processing multi-view parallel data and outline the speaker's research plan in the area, covering two recent research publications authored by the speaker: one on multi-view clustering, and another on multi-view dimensionality reduction.

Abstract:

In this talk we will present the adaptive cuckoo filter (ACF), a data structure for approximate set membership that extends cuckoo filters by reacting to false positives, removing them for future queries. As an example application, in packet processing queries may correspond to flow identifiers, so a search for an element is likely to be followed by repeated searches for that element; removing false positives can therefore significantly lower the false positive rate. The ACF, like the cuckoo filter, uses a cuckoo hash table to store fingerprints. We allow fingerprint entries to be changed in response to a false positive, in a manner designed to minimize the effect on the performance of the filter. We will show that the ACF significantly reduces the false positive rate, using both a theoretical model of the false positive rate and simulations on synthetic data sets and real packet traces.
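The adaptation idea can be sketched in a few lines. This is a deliberately simplified toy, not the ACF itself: one slot per bucket and no cuckoo eviction, with tiny, illustrative table and fingerprint sizes, so only the reaction to false positives is modelled. As in the ACF setting, a backing store holding the full keys is available off the fast path.

```python
import hashlib

NBUCKETS, NSEL, FBITS = 16, 4, 6   # tiny sizes: high FP rate on purpose

def h(x, salt):
    d = hashlib.blake2b(f"{salt}:{x}".encode()).digest()
    return int.from_bytes(d[:8], "big")

class AdaptiveFilter:
    """Toy adaptive filter.  Each slot stores a short fingerprint plus a
    selector saying which fingerprint function is in use; a reported
    false positive rotates the selector for that slot."""

    def __init__(self):
        self.slots = [None] * NBUCKETS   # (fingerprint, selector) per bucket
        self.store = [None] * NBUCKETS   # backing store keeps the full keys

    def _fp(self, key, sel):
        return h(key, f"fp{sel}") % (1 << FBITS)

    def insert(self, key):
        i = h(key, "bucket") % NBUCKETS
        self.slots[i] = (self._fp(key, 0), 0)
        self.store[i] = key              # full key, off the lookup fast path

    def contains(self, key):
        i = h(key, "bucket") % NBUCKETS
        if self.slots[i] is None:
            return False
        fp, sel = self.slots[i]
        return fp == self._fp(key, sel)

    def report_false_positive(self, key):
        """contains(key) was True but the backing store disagreed:
        re-fingerprint the *stored* key with the next selector, which
        removes this false positive for future queries."""
        i = h(key, "bucket") % NBUCKETS
        _, sel = self.slots[i]
        sel = (sel + 1) % NSEL
        self.slots[i] = (self._fp(self.store[i], sel), sel)

acf = AdaptiveFilter()
acf.insert("flow-A")
# hunt for a query that collides with flow-A's bucket and fingerprint
fp_query = next(q for q in (f"q{i}" for i in range(100_000)) if acf.contains(q))
acf.report_false_positive(fp_query)
print(acf.contains("flow-A"))   # True: the stored key still matches
# contains(fp_query) is now almost surely False: the fingerprint was re-drawn
```

The real ACF additionally keeps several slots per bucket and relocates entries cuckoo-style, but the essential move is the same: change which fingerprint a slot stores, without ever introducing false negatives for the stored keys.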

Tuesday, January 30, 2018

10:45am Meeting room 302 (Mountain View), level 3

Manuel Bravo, PhD Student, University of Lisboa, Portugal

Towards a Distributed Metadata Service for Causal Consistency

Abstract:

The problem of ensuring consistency in applications that manage replicated data is one of the main challenges of distributed computing. The observation that delegating consistency management entirely to the programmer makes the application code error-prone, and that strong consistency conflicts with availability, has spurred the quest for meaningful consistency models that can be supported effectively by the data service. Among the several invariants that may be enforced, ensuring that updates are applied and made visible respecting causality has emerged as a key ingredient among the many consistency criteria and client session guarantees proposed and implemented in the last decade. Mechanisms to preserve causality can be found in systems offering anything from weaker to stronger consistency guarantees. In fact, causal consistency is pivotal in the consistency spectrum, given that it has been proved to be the strongest consistency model that does not compromise availability.
In this talk, I present a novel metadata service that can be used by geo-replicated data services to efficiently ensure causal consistency across geo-locations. Its design brings two main contributions:
• It eliminates the tradeoff between throughput and data freshness inherent to previous solutions. To avoid impairing throughput, our service keeps the size of the metadata small and constant, independently of the number of clients, servers, partitions, and locations. By using clever metadata propagation techniques, we also ensure that the visibility latency of updates approximates that of weakly consistent systems, which are not required to maintain metadata or to causally order operations.
• It allows data services to fully benefit from partial geo-replication, by implementing genuine partial replication, requiring datacenters to manage only the metadata concerning data items replicated locally.
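As background for the metadata-size point: the classic causal metadata is a vector clock with one entry per datacenter, which grows with the system; the service above replaces it with small, constant-size metadata. A minimal sketch of the vector-clock baseline and the visibility check it supports (three illustrative datacenters):

```python
def vc_leq(a, b):
    """Vector clock order: a causally precedes-or-equals b iff
    a <= b componentwise."""
    return all(x <= y for x, y in zip(a, b))

def can_apply(ts, origin_dc, local_view):
    """A remote update may be made visible only once everything it
    causally depends on (its timestamp minus its own event) is
    already visible locally."""
    deps = list(ts)
    deps[origin_dc] -= 1
    return vc_leq(deps, local_view)

# DC0 writes x; DC1 receives x and then writes y, so y depends on x.
dc0 = [0, 0, 0]
dc0[0] += 1
ts_x = list(dc0)        # [1, 0, 0]

dc1 = list(ts_x)        # DC1's clock after applying x
dc1[1] += 1
ts_y = list(dc1)        # [1, 1, 0]

# DC2 must not expose y before x, or a reader would see an effect
# without its cause:
print(can_apply(ts_y, 1, [0, 0, 0]))  # False: y's dependency x is missing
print(can_apply(ts_y, 1, ts_x))       # True: x is visible, y may follow
```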

Friday, January 12, 2018

Miguel Á. Carreira-Perpiñán, Professor, University of California at Merced, USA

Model compression as constrained optimization, with application to neural nets

Abstract:

Deep neural nets have in recent years become a widespread practical technology, with impressive performance in computer vision, speech recognition, natural language processing and many other applications. Deploying deep nets in mobile phones, robots, sensors and IoT devices is of great interest. However, state-of-the-art deep nets for tasks such as object recognition are too large to be deployed in these devices because of the limits these devices impose on CPU speed, memory, bandwidth, battery life and energy consumption. This has made compressing neural nets an active research problem. We give a general formulation of model compression as constrained optimization. This includes many types of compression: quantization, low-rank decomposition, pruning, lossless compression and others. Then, we give a general algorithm to optimize this nonconvex problem based on the augmented Lagrangian and alternating optimization. This results in a "learning-compression" (LC) algorithm, which alternates a learning step of the uncompressed model, independent of the compression type, with a compression step of the model parameters, independent of the learning task. This simple, efficient algorithm is guaranteed to find the best compressed model for the task in a local sense under standard assumptions. We then describe specializations of the LC algorithm for various types of compression, such as binarization, ternarization and other forms of quantization, pruning, low-rank decomposition, and other variations. We show experimentally with large deep neural nets such as ResNets that the LC algorithm can achieve much higher compression rates than previous work on deep net compression for a given target classification accuracy. For example, we can often quantize down to just 1 bit per weight with negligible accuracy degradation. This is joint work with my PhD students Yerlan Idelbayev and Arman Zharmagambetov.
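The alternation can be sketched on a toy problem. This is a simplified quadratic-penalty variant of the LC scheme (the actual algorithm carries augmented-Lagrangian multipliers), applied to least-squares regression with 1-bit weights plus a shared scale; all sizes, seeds and schedules are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # toy regression task
y = X @ rng.normal(size=8)

def l_step(theta, mu):
    """Learning step: argmin_w ||Xw - y||^2 + (mu/2)||w - theta||^2.
    Closed form here because the loss is quadratic; gradient steps
    would be used for a deep net (independent of the compression type)."""
    A = X.T @ X + (mu / 2) * np.eye(8)
    return np.linalg.solve(A, X.T @ y + (mu / 2) * theta)

def c_step(w):
    """Compression step: closest binarized vector a*sign(w) to w, with
    optimal shared scale a = mean(|w|) (independent of the learning task)."""
    return np.abs(w).mean() * np.sign(w)

theta = c_step(np.linalg.lstsq(X, y, rcond=None)[0])  # init: direct projection
mu = 1.0
for _ in range(30):        # alternate L and C steps while driving mu up
    w = l_step(theta, mu)
    theta = c_step(w)
    mu *= 1.5

# All weights share one magnitude: 1 bit per weight plus a single scale.
print(sorted(set(np.round(np.abs(theta), 6))))
```

Swapping `c_step` for a different projection (ternarization, k-means quantization, magnitude pruning, low-rank truncation) changes the compression type without touching the learning step, which is the modularity the LC formulation is after.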