Sebastian Weichwald.

I am an advocate of pragmatic causal modelling and aim to bring statistical causal modelling from pen and paper to fruitful application.
We do conceptual work on how our ability to reason causally about a system depends on the variables, and the transformations thereof, that are used as its descriptors (UAI paper).
We were the first to provide a comprehensive set of causal interpretation rules for encoding and decoding models in neuroimaging studies (NeuroImage paper, explainer video (5 min)).

Our CoCaLa team won the Causality 4 Climate NeurIPS competition! Among all 190 competitors, of whom 40 were very active, we won the most categories (18 out of 34), came second in the remaining 16 categories, and won the overall competition with an average AUC-ROC score of 0.917 (2nd and 3rd place achieved 0.722 and 0.676, respectively). Congratulations to the many great teams, and thanks to the organisers for putting together a fun competition. You can check out our slides here, re-watch the NeurIPS session here, read more on the competition results, and check out our brief article and code.

Preprints.

Distributional robustness as a guiding principle for causality in cognitive neuroscience

S Weichwald,
J Peters

We outline why we believe that distributional robustness and model generalisability can be useful for guiding causality research in cognitive neuroscience. In particular, these principles can help address the scarcity of targeted interventional data and the difficulty of defining the right variables. We provide an accessible introduction to causality and review selected causal discovery approaches and their underlying ideas, assumptions, and problems.

Causal structure learning from time series: Large regression coefficients may predict causal links better in practice than small p-values

S Weichwald,
ME Jakobsen,
PB Mogensen,
L Petersen,
N Thams,
G Varando

We describe the algorithms for causal structure learning from time series data that won the NeurIPS competition »Causality 4 Climate« 2019. We examine why large regression coefficients may predict causal links better in practice than small p-values, and thus why normalising the data may sometimes hinder causal structure learning. The algorithms are available at tidybench.
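The core idea can be illustrated with a toy simulation (deliberately not the actual tidybench code; the VAR(1) setup, coefficients, and variable names below are all illustrative): score each candidate lagged link by the absolute regression coefficient and check that the true link receives the larger score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bivariate time series in which X drives Y with lag 1 (X -> Y).
T = 2000
X = np.zeros(T)
Y = np.zeros(T)
for t in range(1, T):
    X[t] = 0.9 * X[t - 1] + rng.normal()
    Y[t] = 0.9 * Y[t - 1] + 0.5 * X[t - 1] + rng.normal()

def lagged_coef(cause, effect):
    """Regress effect_t on (effect_{t-1}, cause_{t-1}); return |coef of cause|."""
    design = np.column_stack([effect[:-1], cause[:-1]])
    coefs, *_ = np.linalg.lstsq(design, effect[1:], rcond=None)
    return abs(coefs[1])

score_xy = lagged_coef(X, Y)  # score for the true link X -> Y
score_yx = lagged_coef(Y, X)  # score for the absent link Y -> X
print(score_xy > score_yx)    # the true link gets the larger coefficient
```

Rescaling each series changes the coefficients but not the p-values, which is one way to see why normalisation can interact badly with coefficient-based scoring.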

coroICA, confounding-robust ICA, extends the ordinary ICA model to incorporate any group-wise stationary noise and thus provides a justified alternative to applying ICA to data blindly pooled across groups (e.g. subjects). We explain its causal interpretation and motivation, provide an efficient estimation procedure, prove identifiability under mild assumptions, and demonstrate its applicability to EEG data.

Causal Consistency of Structural Equation Models

Ideally, causal models of the same system should be consistent with one another in the sense that they agree in their predictions of the effects of interventions. We formalise this notion of consistency in the case of Structural Equation Models (SEMs) by introducing exact transformations between SEMs.
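In rough symbols, and glossing over the technical conditions spelled out in the paper, the consistency requirement can be sketched as follows (the notation here is a simplified paraphrase, not the paper's exact formalism):

```latex
% An SEM $\mathcal{M}_Y$ is an exact $\tau$-transformation of an SEM
% $\mathcal{M}_X$ if interventions commute with the variable map $\tau$:
\tau_*\!\left( P^{\operatorname{do}(i)}_{\mathcal{M}_X} \right)
  \;=\; P^{\operatorname{do}(\omega(i))}_{\mathcal{M}_Y}
  \qquad \text{for all considered interventions } i,
% where $\omega$ assigns to each intervention on $\mathcal{M}_X$ a
% corresponding intervention on $\mathcal{M}_Y$, and $\tau_*$ denotes the
% push-forward of distributions along $\tau$.
```

That is, transforming the variables first and then intervening should predict the same distribution as intervening first and then transforming.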

Absence of EEG correlates of self-referential processing depth in ALS

We find that electroencephalography (EEG) correlates of self-referential thinking are present in healthy individuals, but not in those with ALS. In particular, thinking about themselves or others significantly modulates the bandpower in the medial prefrontal cortex in healthy individuals, but not in ALS patients.

MERLiN: Mixture Effect Recovery in Linear Networks

MERLiN is a causal inference algorithm that can recover from an observed linear mixture a causal variable that is an effect of another given variable. MERLiN implements a novel idea on how to (re-)construct causal variables and is robust against hidden confounding.

Pymanopt: A Python Toolbox for Optimization on Manifolds using Automatic Differentiation

Pymanopt lowers the barrier for users wishing to apply state-of-the-art manifold optimisation techniques: it uses automatic differentiation to compute derivative information, saving users time and protecting them from potential calculation and implementation errors. (Example: manifold optimisation for inferring the parameters of a MoG model.)
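To give a flavour of what such a toolbox automates, here is a hand-rolled sketch in plain NumPy (deliberately not the Pymanopt API) of Riemannian gradient ascent on the unit sphere; Pymanopt would instead derive the gradient automatically and handle the manifold geometry for you.

```python
import numpy as np

# Maximise x^T A x over the unit sphere, i.e. find the dominant eigenvector
# of a symmetric matrix A, by projected (Riemannian) gradient ascent.
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
A = B + B.T  # symmetric test matrix

x = rng.normal(size=5)
x /= np.linalg.norm(x)
for _ in range(1000):
    egrad = 2 * A @ x                 # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x   # project onto the tangent space at x
    x = x + 0.05 * rgrad              # ascent step in the tangent direction
    x /= np.linalg.norm(x)            # retract back onto the sphere

# Compare with the eigenvector of the largest eigenvalue.
w, V = np.linalg.eigh(A)
top = V[:, np.argmax(w)]
print(abs(x @ top))  # should be close to 1 (same direction up to sign)
```

The projection and retraction steps are exactly the manifold-specific boilerplate that Pymanopt abstracts away.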

Causal interpretation rules for encoding and decoding models in neuroimaging

We provide a set of rules that determine which causal statements are warranted and which are not supported by the empirical evidence. In particular, only encoding models in the stimulus-based setting support unambiguous causal interpretations. By combining encoding and decoding models, however, we obtain insights into causal relations beyond those implied by each individual model type.

Causal and anti-causal learning in pattern recognition for neuroimaging

In this paper, we argue that it is not sufficient to distinguish between encoding and decoding models: the interpretation of such models depends on whether they are employed in a stimulus- or response-based setting.

Decoding index finger position from EEG using random forests

In this work we show that index finger positions can be decoded from non-invasive EEG recordings in healthy human subjects. Among the different spectral features investigated, high β-power (20–30 Hz) over the contralateral sensorimotor cortex carried the most information about finger position.
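As a minimal illustration of the kind of spectral feature used (this is a generic FFT periodogram sketch, not the paper's actual pipeline; the sampling rate and signals are made up), one can estimate 20–30 Hz β-bandpower of a single channel like this:

```python
import numpy as np

fs = 250  # sampling rate in Hz (assumed for this sketch)

def bandpower(signal, fs, low, high):
    """Average periodogram power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= low) & (freqs <= high)
    return psd[band].mean()

t = np.arange(0, 2, 1 / fs)
beta_burst = np.sin(2 * np.pi * 25 * t)  # 25 Hz oscillation: inside the band
alpha_wave = np.sin(2 * np.pi * 10 * t)  # 10 Hz oscillation: outside it

print(bandpower(beta_burst, fs, 20, 30) > bandpower(alpha_wave, fs, 20, 30))
```

Such per-channel bandpower values would then serve as input features to a classifier such as a random forest.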

Working papers.

In collaboration with the University Heart Center Zurich and ETH Zurich, we are developing robust models for predicting 1-year mortality after acute coronary syndromes. Our approach is robust, interpretable, uses only objective measurements as input instead of subjective assessments by clinicians, and improves upon state-of-the-art performance.

Thesis.

Pragmatism and Variable Transformations in Causal Modelling

S Weichwald, ETH Zurich, 2019

The statistical treatment of causal modelling lays out methodology that, under well specified assumptions, enables us to infer cause-effect relationships from observational data. The adoption and fruitful utilisation of such methods remains limited, however, despite the statistical foundations and numerous theoretical advances. In this thesis, we present contributions towards closing the gap between statistical causal modelling and its successful application.

Other.

The right tool for the right question — beyond the encoding versus decoding dichotomy

S Weichwald,
M Grosse-Wentrup

2017. In this commentary, we construct two simple and analytically tractable examples to provide further intuition about the problems with interpreting encoding and decoding models. We argue that if we want to understand how the brain generates cognition, we need to move beyond the encoding versus decoding dichotomy and instead discuss and develop tools that are specifically tailored to our endeavour.

A note on the expected minimum error probability in equientropic channels

S Weichwald,
T Fomina,
B Schölkopf,
M Grosse-Wentrup

2017. In this note, we characterise the quality of a code (i.e. a given encoding routine) by an upper bound on the expected minimum error probability that can be achieved when using this code. We show that for equientropic channels this upper bound is minimal for codes with maximal marginal entropy.

What is Cantor's continuum problem?

S Weichwald

2013. This seminar paper reviews Kurt Gödel's article »What is Cantor's continuum problem?«. As this paper aims to be almost self-contained, short recaps, rough explanations and selective examples are provided where appropriate.

Langton's Ant (MATLAB-Simulation)

S Weichwald

2011. A few small scripts that simulate the ant's behaviour within different two-dimensional grids with different kinds of borders. The ant is represented by a little red triangle that indicates its current direction. One can follow the ant move by move, step by step, or in fast-forward mode.
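The rule itself fits in a few lines; here is a Python sketch (the MATLAB scripts above do the same on bounded grids with various border types, whereas this version uses an unbounded grid stored as a set of black cells):

```python
def step(pos, heading, black):
    """Advance the ant one move: turn, flip the cell's colour, walk forward."""
    x, y = pos
    dx, dy = heading
    if pos in black:             # black cell: turn left, repaint white
        heading = (-dy, dx)
        black.remove(pos)
    else:                        # white cell: turn right, repaint black
        heading = (dy, -dx)
        black.add(pos)
    return (x + heading[0], y + heading[1]), heading, black

pos, heading, black = (0, 0), (0, 1), set()
for _ in range(11000):           # after ~10000 steps the famous 'highway' emerges
    pos, heading, black = step(pos, heading, black)
print(len(black))                # number of black cells left behind
```

Despite the two-line rule, the trajectory passes through a long chaotic phase before settling into the periodic highway pattern.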

Talk on »Pragmatic Causal Modelling and Variable Transformations« at the Copenhagen Causality Lab (CoCaLa), University of Copenhagen, Copenhagen, Denmark

2019.

Talk on »Causal Consistency of SEMs & Causal Models as Posets of Distributions« at the Oberwolfach Workshop »Foundations and New Horizons for Causal Inference«, Mathematical Research Institute of Oberwolfach (MFO), Germany

2017.

Talk on »Bridging the Gap: Causality in the Wild« at the Institute for Machine Learning, ETH Zurich, Zurich, Switzerland

2017.

Talk on »Bridging the Gap: Causal Inference in Neuroimaging« at the Division of Clinical Psychiatry Research, University of Zurich, Zurich, Switzerland, Host: DR Bach

2016.

Talk on »How to obtain causal hypotheses from neuroimaging studies« at the symposium »What Neuroimaging Can Tell Us? From Correlation to Causation and Cognitive Ontologies«, also with C Herrmann, M Lindquist, and R Poldrack, at the annual meeting of the Organization for Human Brain Mapping (OHBM), Geneva, Switzerland

2016.

Conference talk on »Recovery of non-linear cause-effect relationships from linearly mixed neuroimaging data« at the International Workshop on Pattern Recognition in Neuroimaging (PRNI), Trento, Italy

We are happy to announce that Nicolas Boumal and Bamdev Mishra (both core developers of manopt) are joining the pymanopt team as maintainers. This will improve the integration of new methods as well as the level of maintenance, and will also help to slowly grow the Python userbase transitioning away from non-open, non-free MATLAB. On another positive note, it appears that FAIR may be using our toolbox... pssst ;-) ...which resulted in this pull request by Leon Bottou from Facebook AI Research that could bring PyTorch support to pymanopt in the very near future.

Oct 2018.

Aaron Bahde successfully completed his essay rotation with me on "Different Notions of Causality employed in fMRI Analysis" as part of his master's studies in Neural Information Processing – Congratulations!

I am happy to confirm the speakers for the causality workshop in July that I am organising. We will have Frederick Eberhardt (Caltech) presenting work on micro and macro causal variables, Caroline Uhler (MIT) on causality in genomics, as well as talks on causality and fairness, group invariance principles for causal inference, and the detection of confounding via typicality principles.

I was awarded a CLS exchange fellowship to fund my six-month research stay at ETH Zurich. I am looking forward to a collaboration with the cardiology section of the University Hospital Zurich, as well as to TAing the machine learning lecture at ETH, where we organise practical machine learning challenges for ~400 students.

We have released an early version of Pymanopt: A Python Toolbox for Manifold Optimization using Automatic Differentiation. This example demonstrates how to infer the parameters of a Mixture of Gaussian (MoG) model using manifold optimisation instead of expectation maximisation (EM).

Completed my master's at UCL with a thesis on causal effect recovery from linear mixtures. It's time for holidays in the United States!
Besides an exciting road trip, I am also looking forward to interesting intermezzi: I will present our recent work at the Poldrack Lab (Stanford University, September) and the LIINC group (Columbia University, October), and will visit Martin Lindquist (Johns Hopkins University, October).

Moritz will present our recent work at this year's PRNI workshop (Stanford University, June).
I have been invited for talks at the FMRIB Analysis Group (University of Oxford, July)
and the LIINC group (Columbia University, October).
Furthermore, I will be visiting Martin Lindquist (Johns Hopkins University, October).
– Looking forward to meeting inspiring people and having interesting discussions!

Nov 2014.

A golden oldie worth a reread: Data Set Selection by Doudou LaLoudouana and Mambobo Bonouliqui Tarare.