Dominik Csiba

Team Leader for Algorithms

Innovatrics

NEWS

Thesis submitted

11. September 2017

I have successfully submitted my PhD thesis, which is currently with my opponents. My viva should take place in November. After that, I will be officially finished with my PhD studies. In the meantime I have started as a part-time teacher at LEAF Academy, teaching AP Calculus. I spend the rest of my work time at the company Operam, where I have switched from a part-time role to a full-time position.

Back in Slovakia

5. January 2017

In mid-December I moved back to Slovakia for good. I will be finishing my PhD studies remotely and handing in my thesis in August 2017. Goodbye, Edinburgh! I also have two upcoming events. First, I will be a mentor at Basecamp, a Data Science Bootcamp in Vienna, where I will give introductory lectures on machine learning and optimization. Second, I will give a talk in Bratislava as part of a brand new series called (no surprises here) Machine Learning Meetups, on the role of optimization in machine learning. I am looking forward to both of these events!

IMA Numerical Linear Algebra and Optimization

9. September 2016

I have just returned from Birmingham, where I attended a conference on Numerical Linear Algebra and Optimization. Our group organized two minisymposia, and I gave a talk on Importance Sampling for Minibatches (paper, slides) in one of them. All in all, it was a nice conference.

ESSAY

31. August 2016

I have prepared a two page essay as a part of my 2nd year report at the University of Edinburgh. The essay is basically a short report on my work and achievements during the first two years of my PhD. It should be accessible to all mathematicians. It is available here.

MATHEMATICS IN MACHINE LEARNING

14. March 2016

OPTIMIZATION WITHOUT BORDERS

12. February 2016

I spent the last week in Les Houches at a very nice workshop devoted to the 60th birthday of the great Yurii Nesterov. It was a great pleasure to attend a workshop with the top researchers in statistical learning. The weather was not very good, but we managed to get in a full day of skiing. Here is a photo of all the participants. What a week!

NEW PAPER OUT!

9. February 2016

Today we uploaded a new paper to arXiv. It is my first paper based on my very own idea, and it is joint work with my supervisor Peter. The paper is called "Importance Sampling for Minibatches" and it can be found here. It is short and neat; I hope you will enjoy it as much as I do!

MATHEMATICS IN MACHINE LEARNING COURSE

23. January 2016

I am going to organize a course on Mathematics in Machine Learning at my alma mater, the Faculty of Mathematics, Physics and Informatics of Comenius University. The course will be held from 8 to 17 April. All the information can be found here. It's going to be great!

Organizer

Trojsten is a Slovak NGO working with high-school students talented in Mathematics, Physics and Computer Science.
Every year we organize correspondence competitions, which involves selecting problems and marking the submitted solutions. We also organize week-long camps for the most successful participants.
In total, I have helped select problems and mark solutions for 20+ series, organized 5+ camps, and given 20+ talks to high-school students.

VOLUNTEER

2014 – 2015

Edinburgh

Treasurer at the Edinburgh Chapter

SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS

I was responsible for the finances, and as a chapter we jointly organised a few events.

Edinburgh, UK

In this paper we introduce two novel generalizations of the theory for gradient descent type methods in the proximal setting. First, we introduce the proportion function, which we further use to analyze all known (and many new) block-selection rules for block coordinate descent methods under a single framework. This framework includes randomized methods with uniform, non-uniform or even adaptive sampling strategies, as well as deterministic methods with batch, greedy or cyclic selection rules. Second, the theory of strongly-convex optimization was recently generalized to a specific class of non-convex functions satisfying the so-called Polyak-Łojasiewicz condition. To mirror this generalization in the weakly convex case, we introduce the Weak Polyak-Łojasiewicz condition, using which we give global convergence guarantees for a class of non-convex functions previously not considered in theory. Additionally, we establish (necessarily somewhat weaker) convergence guarantees for an even larger class of non-convex functions satisfying only a certain smoothness assumption. By combining the two above-mentioned generalizations we recover the state-of-the-art convergence guarantees for a large class of previously known methods and setups as special cases of our general framework. Moreover, our framework allows for the derivation of new guarantees for many new combinations of methods and setups, as well as a large class of novel non-convex objectives. The flexibility of our approach offers a lot of potential for future research, as a new block selection procedure will have a convergence guarantee for all objectives considered in our framework, while a new objective analyzed under our approach will have a whole fleet of block selection rules with convergence guarantees readily available.
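
As a rough illustration of the framework's ingredients, here is a minimal sketch of block coordinate descent on a toy smooth quadratic, showing how uniform, non-uniform (importance), greedy and cyclic block-selection rules plug into a single loop. The objective, step sizes and blocks of size one are illustrative assumptions and do not reproduce the paper's general composite setting or its proportion function.

```python
import numpy as np

# Toy smooth objective f(x) = 0.5 * x^T A x - b^T x with coordinate-wise
# smoothness constants L_i = A[i, i]; an illustrative stand-in for the
# general composite objective treated in the paper.
rng = np.random.default_rng(0)
d = 10
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)
b = rng.standard_normal(d)
L = np.diag(A).copy()

def select(rule, x, t):
    """Pick a coordinate (a block of size one) under different selection rules."""
    if rule == "uniform":
        return rng.integers(d)
    if rule == "importance":                    # non-uniform, proportional to L_i
        return rng.choice(d, p=L / L.sum())
    if rule == "greedy":                        # Gauss-Southwell-Lipschitz rule
        return int(np.argmax(np.abs(A @ x - b) / np.sqrt(L)))
    if rule == "cyclic":
        return t % d
    raise ValueError(rule)

def coordinate_descent(rule, iters=2000):
    x = np.zeros(d)
    for t in range(iters):
        i = select(rule, x, t)
        x[i] -= (A[i] @ x - b[i]) / L[i]        # exact coordinate minimization step
    return 0.5 * x @ A @ x - b @ x              # final objective value

for rule in ["uniform", "importance", "greedy", "cyclic"]:
    print(rule, coordinate_descent(rule))
```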

We consider online optimization in the 1-lookahead setting, where the objective does not decompose additively over the rounds of the online game. The resulting formulation enables us to deal with non-stationary and/or long-term constraints, which arise, for example, in online display advertising problems. We propose an online primal-dual algorithm for which we obtain dynamic cumulative regret guarantees. They depend on the convexity and the smoothness of the non-additive penalty, as well as on terms capturing the smoothness with which the residuals of the non-stationary and long-term constraints vary over the rounds. We conduct experiments on synthetic data to illustrate the benefits of the non-additive penalty and show vanishing regret convergence on live traffic data collected by a display advertising platform in production.
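
For intuition only, the sketch below runs a classical online primal-dual scheme for a long-term constraint (primal gradient step on a Lagrangian, projected dual ascent on the constraint residual), in the spirit of standard treatments of long-term constraints rather than the paper's 1-lookahead, non-additive algorithm. The toy losses, constraint, budget and step size are all invented for illustration.

```python
import numpy as np

# Online primal-dual updates for a toy long-term constraint: per-round loss
# f_t(x) = <w_t, x> + 0.5 * ||x||^2 and constraint g(x) = <c, x> - budget,
# which should hold on average over the rounds. All quantities are invented
# for illustration.
rng = np.random.default_rng(1)
T, d = 500, 3
eta = 0.05                                   # step size (illustrative)
budget = 0.5
c = np.ones(d) / d

x = np.zeros(d)
lam = 0.0                                    # dual variable for the constraint
total_loss, total_residual = 0.0, 0.0

for t in range(T):
    w = rng.standard_normal(d)
    g = c @ x - budget                       # constraint residual at the current play
    total_loss += w @ x + 0.5 * x @ x
    total_residual += g

    # Primal gradient step on the Lagrangian, projected dual ascent on g.
    x = np.clip(x - eta * (w + x + lam * c), -1.0, 1.0)
    lam = max(0.0, lam + eta * g)

print("average loss:", total_loss / T)
print("average constraint residual:", total_residual / T)   # <= 0 means satisfied on average
```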

29 May 2016

Coordinate Descent Faceoff: Primal or Dual?

Edinburgh, UK

This paper is submitted.

Dominik Csiba, Peter Richtárik
ArXiv

Randomized coordinate descent (RCD) methods are state-of-the-art algorithms for training linear predictors via minimizing regularized empirical risk. When the number of examples (n) is much larger than the number of features (d), a common strategy is to apply RCD to the dual problem. On the other hand, when the number of features is much larger than the number of examples, it makes sense to apply RCD directly to the primal problem. In this paper we provide the first joint study of these two approaches when applied to L2-regularized ERM. First, we show through a rigorous analysis that for dense data, the above intuition is precisely correct. However, we find that for sparse and structured data, primal RCD can significantly outperform dual RCD even if d≪n, and vice versa, dual RCD can be much faster than primal RCD even if n≪d. Moreover, we show that, surprisingly, a single sampling strategy minimizes both the (bound on the) number of iterations and the overall expected complexity of RCD. Note that the latter complexity measure also takes into account the average cost of the iterations, which depends on the structure and sparsity of the data, and on the sampling strategy employed. We confirm our theoretical predictions using extensive experiments with both synthetic and real data sets.
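
A hedged sketch of the two approaches on ridge regression (squared loss with L2 regularization): primal RCD updates one feature at a time, while dual RCD (SDCA) updates one example at a time using the closed-form squared-loss step. The synthetic data and iteration counts are illustrative and do not implement the paper's sampling strategies or complexity analysis.

```python
import numpy as np

# L2-regularized ERM with squared loss (ridge regression):
#   P(w) = (1/2n) * ||A w - y||^2 + (lam/2) * ||w||^2.
# Primal RCD samples a feature j; dual RCD (SDCA) samples an example i and
# uses the closed-form squared-loss update. Data sizes are illustrative.
rng = np.random.default_rng(2)
n, d, lam = 200, 50, 0.1
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def primal_obj(w):
    return 0.5 * np.mean((A @ w - y) ** 2) + 0.5 * lam * w @ w

def primal_rcd(iters=5000):
    w = np.zeros(d)
    L = (A ** 2).sum(axis=0) / n + lam            # coordinate smoothness constants
    for _ in range(iters):
        j = rng.integers(d)
        g = A[:, j] @ (A @ w - y) / n + lam * w[j]
        w[j] -= g / L[j]
    return w

def dual_rcd(iters=5000):
    alpha = np.zeros(n)
    w = np.zeros(d)                               # maintained as (1/(lam*n)) * A^T alpha
    for _ in range(iters):
        i = rng.integers(n)
        delta = (y[i] - A[i] @ w - alpha[i]) / (1.0 + A[i] @ A[i] / (lam * n))
        alpha[i] += delta
        w += delta * A[i] / (lam * n)
    return w

print("primal RCD objective:", primal_obj(primal_rcd()))
print("dual RCD objective:  ", primal_obj(dual_rcd()))
```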

9 February 2016

Importance Sampling for Minibatches

Edinburgh, UK

This paper is submitted.

Dominik Csiba, Peter Richtárik
ArXiv

Minibatching is a very well studied and highly popular technique in supervised learning, used by practitioners due to its ability to accelerate training through better utilization of parallel processing power and reduction of stochastic variance. Another popular technique is importance sampling – a strategy for preferential sampling of more important examples also capable of accelerating the training process. However, despite considerable effort by the community in these areas, and due to the inherent technical difficulty of the problem, there is no existing work combining the power of importance sampling with the strength of minibatching. In this paper we propose the first importance sampling for minibatches and give simple and rigorous complexity analysis of its performance. We illustrate on synthetic problems that for training data of certain properties, our sampling can lead to several orders of magnitude improvement in training time. We then test the new sampling on several popular datasets, and show that the improvement can reach an order of magnitude.
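
To convey the basic idea, here is a sketch of minibatch SGD for regularized logistic regression in which each example enters the minibatch independently with probability proportional to its smoothness constant, and the gradient estimate is reweighted so that it stays unbiased. This is a simplified stand-in for the sampling constructed in the paper; the data, probabilities and step size are illustrative assumptions.

```python
import numpy as np

# Minibatch SGD for L2-regularized logistic regression with an importance-style
# independent sampling: example i enters the minibatch with probability p_i
# proportional to its smoothness constant L_i = 0.25 * ||a_i||^2, and the
# gradient estimate is reweighted by 1/p_i so that it stays unbiased.
rng = np.random.default_rng(3)
n, d, lam, batch = 500, 20, 0.01, 10
A = rng.standard_normal((n, d)) * rng.uniform(0.1, 3.0, size=(n, 1))  # uneven row norms
y = rng.choice([-1.0, 1.0], size=n)

L = 0.25 * (A ** 2).sum(axis=1)
p = np.minimum(1.0, batch * L / L.sum())          # inclusion probabilities, E|S| ~ batch

def loss_grad(x, i):
    """Gradient of the logistic loss of example i (without regularization)."""
    return -y[i] * A[i] / (1.0 + np.exp(y[i] * (A[i] @ x)))

x = np.zeros(d)
step = 1.0 / (L.max() + lam)                      # conservative step size
for t in range(300):
    S = np.flatnonzero(rng.random(n) < p)         # independent minibatch sampling
    if S.size == 0:
        continue
    g = sum(loss_grad(x, i) / p[i] for i in S) / n + lam * x
    x -= step * g

obj = np.mean(np.log1p(np.exp(-y * (A @ x)))) + 0.5 * lam * x @ x
print("regularized logistic loss:", obj)
```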

Edinburgh, UK

This paper was published in the Proceedings of ICML 2015. It received the Best Contribution Award (2nd Place) at Optimization and Big Data 2015, Edinburgh.

Stochastic Dual Coordinate Ascent with Adaptive Probabilities
Dominik Csiba, Zheng Qu, Peter Richtárik
Conferences

This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving regularized empirical risk minimization problems. Our modification consists in allowing the method to adaptively change the probability distribution over the dual variables throughout the iterative process. AdaSDCA achieves a provably better complexity bound than SDCA with the best fixed probability distribution, known as importance sampling. However, it is of a theoretical character, as it is expensive to implement. We also propose AdaSDCA+: a practical variant which in our experiments outperforms existing non-adaptive methods.
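
A simplified, AdaSDCA-flavoured sketch on ridge regression: plain SDCA whose sampling distribution over examples is periodically recomputed from the current dual residuals. The exact adaptive probabilities of AdaSDCA (recomputed every iteration) and the cheaper AdaSDCA+ heuristic differ from this simplification; the data and epoch counts are illustrative.

```python
import numpy as np

# SDCA on ridge regression where the sampling distribution over examples is
# recomputed once per epoch from the current dual residuals
# kappa_i = a_i^T w - y_i + alpha_i. AdaSDCA adapts the probabilities every
# iteration and uses a different formula; this only conveys the flavour.
rng = np.random.default_rng(4)
n, d, lam = 300, 30, 0.1
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

alpha = np.zeros(n)
w = np.zeros(d)                                  # maintained as (1/(lam*n)) * A^T alpha
for epoch in range(20):
    kappa = np.abs(A @ w - y + alpha)            # dual residuals
    prob = (kappa + 1e-12) / (kappa + 1e-12).sum()
    for _ in range(n):
        i = rng.choice(n, p=prob)
        delta = (y[i] - A[i] @ w - alpha[i]) / (1.0 + A[i] @ A[i] / (lam * n))
        alpha[i] += delta
        w += delta * A[i] / (lam * n)

primal = 0.5 * np.mean((A @ w - y) ** 2) + 0.5 * lam * w @ w
print("primal objective:", primal)
```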

Edinburgh, UK

In this work we develop a new algorithm for regularized empirical risk minimization. Our method extends recent techniques of Shalev-Shwartz [02/2015], which enable a dual-free analysis of SDCA, to arbitrary mini-batching schemes. Moreover, our method is able to better utilize the information in the data defining the ERM problem. For convex loss functions, our complexity results match those of QUARTZ, which is a primal-dual method also allowing for arbitrary mini-batching schemes. The advantage of a dual-free analysis comes from the fact that it guarantees convergence even for non-convex loss functions, as long as the average loss is convex. We illustrate through experiments the utility of being able to design arbitrary mini-batching schemes.
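
A minimal sketch of the dual-free SDCA idea in the single-example case: pseudo-dual vectors alpha_i are updated from primal gradients only, while maintaining w = (1/(lam*n)) * sum_i alpha_i. The paper's extension to arbitrary mini-batching schemes is not reproduced here; the logistic-loss data and the conservative step size are assumptions for illustration.

```python
import numpy as np

# Dual-free SDCA, single-example version: pseudo-dual vectors alpha_i with the
# invariant w = (1/(lam*n)) * sum_i alpha_i, updated using primal gradients of
# the logistic loss only. The step size and data are illustrative.
rng = np.random.default_rng(5)
n, d, lam = 400, 25, 0.05
A = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

def grad_phi(w, i):
    """Gradient of the logistic loss phi_i at w."""
    return -y[i] * A[i] / (1.0 + np.exp(y[i] * (A[i] @ w)))

L_max = 0.25 * (A ** 2).sum(axis=1).max()
beta = 0.5 / (L_max + lam * n)                   # conservative step size (illustrative)

alpha = np.zeros((n, d))
w = alpha.sum(axis=0) / (lam * n)                # zero at the start
for t in range(20 * n):
    i = rng.integers(n)
    v = grad_phi(w, i) + alpha[i]                # residual of the i-th pseudo-dual vector
    alpha[i] -= beta * lam * n * v
    w -= beta * v                                # keeps w = (1/(lam*n)) * sum_i alpha_i

obj = np.mean(np.log1p(np.exp(-y * (A @ w)))) + 0.5 * lam * w @ w
print("primal objective:", obj)
```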