Summary

The goal of the AI4REASON project is a breakthrough in what is considered a very hard problem in AI and automation of reasoning, namely the problem of automatically proving theorems in large and complex theories. Such complex formal theories arise in projects aimed at verification of today's advanced mathematics such as the Formal Proof of the Kepler Conjecture (Flyspeck), verification of software and hardware designs such as the seL4 operating system kernel, and verification of other advanced systems and technologies on which today's information society critically depends.
Designing an explicitly programmed solution to this problem appears extremely complex and unlikely to succeed. However, we have recently demonstrated that the performance of existing approaches can be multiplied by data-driven AI methods that learn reasoning guidance from large proof corpora. The breakthrough will be achieved by developing such novel AI methods. First, we will devise suitable Automated Reasoning and Machine Learning methods that learn reasoning knowledge and steer the reasoning processes at various levels of granularity. Second, we will combine them into autonomous self-improving AI systems that interleave deduction and learning in positive feedback loops. Third, we will develop approaches that aggregate reasoning knowledge across many formal, semi-formal and informal corpora and deploy the methods as strong automation services for the formal proof community.
The expected outcome is our ability to prove automatically at least 50% more theorems in high-assurance projects such as Flyspeck and seL4, bringing a major breakthrough in formal reasoning and verification. As an AI effort, the project offers a unique path to large-scale semantic AI. The formal corpora concentrate centuries of deep human thinking in a computer-understandable form on which deductive and inductive AI can be combined and co-evolved, providing new insights into how humans do mathematics and science.
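The deduction/learning feedback loop described above can be caricatured in a few lines. The following is a hypothetical toy sketch, not the project's actual system: every name, the "feature" function, and the stand-in prover are invented for illustration. The point it demonstrates is the loop itself: proofs found in one round become training data for premise selection, which lets the prover find more proofs in the next round.

```python
# Toy model of a deduction-learning feedback loop (illustrative only).
# Hidden "ground truth": each theorem thm_i needs exactly one lemma lem_{i % 5}.
USEFUL = {f"thm{i}": f"lem{i % 5}" for i in range(20)}

def feature(thm):
    # Toy feature: theorems in the same class (i % 5) need the same lemma.
    return int(thm[3:]) % 5

def try_prove(thm, suggested_premises):
    # Stand-in for an automated-prover call: succeeds iff the needed
    # lemma appears among the suggested premises.
    return USEFUL[thm] in suggested_premises

def feedback_loop(seed_theorems, rounds=3):
    # Proofs found so far double as training data for premise selection.
    training = {t: USEFUL[t] for t in seed_theorems}
    proved = set(seed_theorems)
    for _ in range(rounds):
        for thm in USEFUL:
            # "Learned" guidance: suggest premises that worked for
            # similar (same-feature) theorems seen in earlier proofs.
            suggested = {p for t, p in training.items()
                         if feature(t) == feature(thm)}
            if try_prove(thm, suggested):
                proved.add(thm)
                training[thm] = USEFUL[thm]  # new proof -> new training data
    return proved
```

With seed proofs covering all five theorem classes the guidance generalises and every theorem is proved, whereas a single seed only ever reaches its own class, illustrating why interleaving deduction and learning in a loop outperforms a single pass.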

Summary

The AlchemEast project is devoted to the study of alchemical theory and practice as it appeared and developed in distinct, albeit contiguous (both chronologically and geographically) areas: Graeco-Roman Egypt, Byzantium, and the Near East, from Ancient Babylonian times to the early Islamic Period. This project combines innovative textual investigations with experimental replications of ancient alchemical procedures. It uses sets of historically and philologically informed laboratory replications in order to reconstruct the actual practice of ancient alchemists, and it studies the texts and literary forms in which this practice was conceptualized and transmitted. It proposes new models for textual criticism in order to capture the fluidity of the transmission of ancient alchemical writings. AlchemEast is designed to carry out a comparative investigation of cuneiform tablets as well as a vast corpus of Greek, Syriac and Arabic writings. It will overcome the old, pejorative paradigm that dismissed ancient alchemy as a "pseudo-science", by proposing a new theoretical framework for comprehending the entirety of ancient alchemical practices and theories. Alongside established forms of scholarly output, such as critical editions of key texts, AlchemEast will provide an integrative, longue durée perspective on the many different phases of ancient alchemy. It will thus offer a radically new vision of this discipline as a dynamic and diversified art that developed across different technical and scholastic traditions. This new representation will allow us to connect ancient alchemy with medieval and early modern alchemy and thus fully reintegrate ancient alchemy in the history of pre-modern alchemy as well as in the history of ancient science more broadly.

Summary

Two-dimensional transition metal dichalcogenides (2D-TMDs) are an exciting class of new materials. Their ultrathin body, optical band gap and unusual spin and valley polarization physics make them very promising candidates for a vast new range of (opto-)electronic applications. So far, most experimental work on 2D-TMDs has been performed on exfoliated flakes made by the ‘Scotch tape’ technique. The major next challenge is the large-area synthesis of 2D-TMDs by a technique that ultimately can be used for commercial device fabrication.
Building upon pure 2D-TMDs, even more functionalities can be gained from 2D-TMD alloys and heterostructures. Theoretical work on these derivatives reveals exciting new phenomena, but experimentally this field is largely unexplored due to synthesis technique limitations.
The goal of this proposal is to combine atomic layer deposition with plasma chemistry to create a novel surface-controlled, industry-compatible synthesis technique that will make large area 2D-TMDs, 2D-TMD alloys and 2D-TMD heterostructures a reality. This innovative approach will enable systematic layer dependent studies, likely revealing exciting new properties, and provide integration pathways for a multitude of applications.
Atomistic simulations will guide the process development and, together with in- and ex-situ analysis, increase the understanding of the surface chemistry involved. State-of-the-art high resolution transmission electron microscopy will be used to study the alloying process and the formation of heterostructures. Luminescence spectroscopy and electrical characterization will reveal the potential of the synthesized materials for (opto)-electronic applications.
The synergy between the excellent background of the PI in 2D materials for nanoelectronics and the group’s leading expertise in ALD and plasma science is unique and provides an ideal stepping stone to develop the synthesis of large-area 2D-TMDs and derivatives.

Max ERC Funding

1 968 709 €

Duration

Start date: 2015-08-01, End date: 2020-07-31

Project acronym

ALERT

Project

ALERT - The Apertif-LOFAR Exploration of the Radio Transient Sky

Summary

"In our largely unchanging radio Universe, a highly dynamic component was recently discovered: flashes of bright radio emission that last only milliseconds but appear all over the sky. Some of these radio bursts can be traced to intermittently pulsating neutron stars. Other bursts however, apparently originate far outside our Galaxy. Due to great observational challenges, the evolution of the neutron stars is not understood, while more importantly, the nature of the extragalactic bursts remains an outright mystery.
My overall aim is to understand the physics that drives both kinds of brief and luminous bursts.
My primary goal is to identify the highly compact astrophysical explosions powering the extragalactic bursts. My previous surveys are the state of the art in fast-transient detection; I will now increase this exploration volume by a factor of 10. In real time I will provide arcsec positions, 10,000-fold more accurate than currently possible, to localize such extragalactic bursts for the first time and understand their origin.
My secondary goal is to unravel the unexplained evolution of intermittently pulsating neutron stars (building on e.g., my recent papers in Science, 2013), by doubling their number and modeling their population.
To achieve these goals, I will carry out a highly innovative survey: the Apertif-LOFAR Exploration of the Radio Transient Sky. ALERT is over an order of magnitude more sensitive than all current state-of-the-art fast-transient surveys.
Through its novel, extremely wide field-of-view, Westerbork/Apertif will detect many tens of extragalactic bursts. Through real-time triggers to LOFAR I will next provide the precise localisation that is essential for radio, optical and high-energy follow-up to, for the first time, shed light on the physics and objects driving these bursts – evaporating primordial black holes; explosions in host galaxies; or, the unknown?"

Summary

Alfonsine astronomy is arguably among the first European scientific achievements. It shaped a scene for actors like Regiomontanus or Copernicus. There is however little detailed historical analysis encompassing its development in its full breadth. ALFA addresses this issue by studying tables, instruments, mathematical and theoretical texts in a methodologically innovative way relying on approaches from the history of manuscript cultures, history of mathematics, and history of astronomy.
ALFA integrates these approaches not only to benefit from different perspectives but also to build new questions from their interactions. For instance the analysis of mathematical practices in astral sciences manuscripts induces new ways to analyse the documents and to think about astronomical questions.
Relying on these approaches the main objectives of ALFA are thus to:
- Retrace the development of the corpus of Alfonsine texts from its origin in the second half of the 13th century to the end of the 15th century by following, on the manuscript level, the milieus fostering it;
- Analyse the Alfonsine astronomers’ practices, their relations to mathematics, to the natural world, to proofs and justification, their intellectual context and audiences;
- Build a meaningful narrative showing how astronomers in different milieus with diverse practices shaped, also from Arabic materials, an original scientific scene in Europe.
ALFA will shed new light on the intellectual history of the late medieval period as a whole and produce a better understanding of its relations to related scientific periods in Europe and beyond. It will also produce methodological breakthroughs impacting the ways history of knowledge is practiced outside the field of ancient and medieval sciences. Efforts will be devoted to bring these results not only to the relevant scholarly communities but also to a wider audience as a resource in the public debates around science, knowledge and culture.

Max ERC Funding

1 871 250 €

Duration

Start date: 2017-09-01, End date: 2022-08-31

Project acronym

AlgoFinance

Project

Algorithmic Finance: Inquiring into the Reshaping of Financial Markets

Host Institution (HI)

COPENHAGEN BUSINESS SCHOOL

Call Details

Consolidator Grant (CoG), ERC-2016-COG

Summary

Present-day financial markets are turning algorithmic, as market orders are increasingly being executed by fully automated computer algorithms, without any direct human intervention. Although algorithmic finance seems to fundamentally reshape the central dynamics in financial markets, and even though it prompts core sociological questions, it has not yet received any systematic attention. In a pioneering contribution to economic sociology and social studies of finance, ALGOFINANCE aims to understand how and with what consequences the turn to algorithms is changing financial markets. The overall concept and central contributions of ALGOFINANCE are the following: (1) on an intra-firm level, the project examines how the shift to algorithmic finance reshapes the ways in which trading firms operate, and does so by systematically and empirically investigating the reconfiguration of organizational structures and employee subjectivity; (2) on an inter-algorithmic level, it offers a ground-breaking methodology (agent-based modelling informed by qualitative data) to grasp how trading algorithms interact with one another in a fully digital space; and (3) on the level of market sociality, it proposes a novel theorization of how intra-firm and inter-algorithmic dynamics can be conceived of as introducing a particular form of sociality that is characteristic to algorithmic finance: a form of sociality-as-association heuristically analyzed as imitation. None of these three levels have received systematic attention in the state-of-the-art literature. Addressing them will significantly advance the understanding of present-day algorithmic finance in economic sociology. By contributing novel empirical, methodological, and theoretical understandings of the functioning and consequences of algorithms, ALGOFINANCE will pave the way for other research into digital sociology and the broader algorithmization of society.
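As a purely illustrative sketch of the kind of agent-based modelling mentioned above (the model, its parameters, and the "herding index" are invented here and are not ALGOFINANCE's actual design), one can simulate trading agents that at each step either act on a private signal or imitate the current majority; even modest imitation pushes the population toward consensus, the sociality-as-imitation dynamic the project theorizes.

```python
import random

def run_market(n_agents=100, steps=50, p_imitate=0.6, seed=1):
    """Toy imitation model: each agent holds a +1 (buy) or -1 (sell) position."""
    rng = random.Random(seed)  # seeded for reproducible runs
    positions = [rng.choice([-1, 1]) for _ in range(n_agents)]
    for _ in range(steps):
        majority = 1 if sum(positions) >= 0 else -1
        for i in range(n_agents):
            if rng.random() < p_imitate:
                positions[i] = majority              # imitate the crowd
            else:
                positions[i] = rng.choice([-1, 1])   # act on a private signal
    # "Herding index": 0 = balanced market, 1 = full consensus
    return abs(sum(positions)) / n_agents
```

Comparing `run_market(p_imitate=0.0)` with `run_market(p_imitate=0.9)` shows the herding index rising sharply with the imitation probability, which is the sort of inter-algorithmic dynamic the project's qualitative fieldwork would parameterize.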

Summary

As diploid organisms inherit one gene copy from each parent, a gene can be expressed from both alleles (biallelic) or from only one allele (monoallelic). Although transcription from both alleles is detected for most genes in cell population experiments, little is known about allele-specific expression in single cells and its phenotypic consequences. To answer fundamental questions about allelic transcription heterogeneity in single cells, this research program will focus on single-cell transcriptome analyses with allelic-origin resolution. To this end, we will investigate both clonally stable and dynamic random monoallelic expression across a large number of cell types, including cells from embryonic and adult stages. This research program will be accomplished with the novel single-cell RNA-seq method developed within my lab to obtain quantitative, genome-wide gene expression measurements. To distinguish between mitotically stable and dynamic patterns of allelic expression, we will analyze large numbers of clonally related cells per cell type, both from primary cultures (in vitro) and from transgenic models that yield clonally related cells in vivo.
The biological significance of the research program is first an understanding of allelic transcription, including the nature and extent of random monoallelic expression across in vivo tissues and cell types. These novel insights into allelic transcription will be important for an improved understanding of how variable phenotypes (e.g. incomplete penetrance and variable expressivity) can arise in genetically identical individuals. Additionally, the single-cell transcriptome analyses of clonally related cells in vivo will provide unique insights into the clonality of gene expression per se.
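To make the notion of allele-resolved measurement concrete, here is a minimal hypothetical sketch (function names and thresholds are invented for illustration, not the lab's actual pipeline) of how per-cell, per-gene maternal and paternal read counts might be classified as biallelic or monoallelic, and how a random-monoallelic fraction could then be summarized across cells.

```python
def classify_allelic(maternal, paternal, min_reads=5, mono_frac=0.98):
    """Classify one gene in one cell from allele-resolved read counts.

    Thresholds are illustrative: a gene needs `min_reads` total reads to
    be called, and is monoallelic if one allele carries >= `mono_frac`
    of the reads.
    """
    total = maternal + paternal
    if total < min_reads:
        return "not_detected"            # too few reads for a confident call
    ratio = maternal / total             # fraction of reads from the maternal allele
    if ratio >= mono_frac:
        return "monoallelic_maternal"
    if ratio <= 1 - mono_frac:
        return "monoallelic_paternal"
    return "biallelic"

def random_monoallelic_fraction(cells):
    """Fraction of detected cells expressing a gene from only one allele.

    `cells` is a list of (maternal, paternal) read counts for one gene
    across single cells.
    """
    calls = [classify_allelic(m, p) for m, p in cells]
    detected = [c for c in calls if c != "not_detected"]
    mono = [c for c in detected if c.startswith("monoallelic")]
    return len(mono) / len(detected) if detected else 0.0
```

For example, `random_monoallelic_fraction([(100, 0), (0, 80), (40, 60), (2, 1)])` counts two monoallelic calls among three detected cells (the fourth is below the read threshold), distinguishing genuine allelic bias from dropout, which is the crux of allelic-origin single-cell analysis.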

Summary

Brain and spinal cord diseases affect 38% of the European population and cost over 800 billion € annually, representing by far the largest health challenge. ALS is a prevalent neurological disease caused by motor neuron death with an invariably fatal outcome. I contributed to ALS research with the groundbreaking discovery of TDP-43 mutations, functionally characterized these mutations in the first vertebrate model and demonstrated a genetic interaction with another major ALS gene, FUS. Emerging evidence indicates that four major causative factors in ALS, C9orf72, TDP-43, FUS and SQSTM1, genetically interact and could function in common cellular mechanisms. Here, I will develop zebrafish transgenic lines for all four genes, using state-of-the-art genomic editing tools to combine simultaneous gene knockout and expression of the mutant alleles. Using these innovative disease models I will study the functional interactions amongst these four genes and their converging effect on key ALS pathogenic mechanisms: autophagy degradation, stress granule formation and RNA regulation. These studies will make it possible to pinpoint the molecular cascades that underlie ALS-related neurodegeneration. We will further expand the current ALS network by proposing and validating novel genetic interactors, which will be further screened for disease-causing variants and as pathological markers in patient samples. The power of zebrafish as a vertebrate model amenable to high-content phenotype-based screens will enable discovery of bioactive compounds that are neuroprotective in multiple animal models of disease. This project will increase the fundamental understanding of the relevance of C9orf72, TDP-43, FUS and SQSTM1 by developing animal models to characterize common pathophysiological mechanisms.
Furthermore, I will uncover novel genetic, disease-related and pharmacological modifiers to extend the ALS network that will facilitate development of therapeutic strategies for neurodegenerative disorders.

Summary

This interdisciplinary project investigates the transformation of Shii Islam in the Middle East and Europe since the 1950s. The project examines the formation of modern Shii communal identities and the role Shii clerical authorities and their transnational networks have played in their religio-political mobilisation. The volatile situation post-Arab Spring, the rise of militant movements such as ISIS and the sectarianisation of geopolitical conflicts in the Middle East have intensified efforts to forge distinct Shii communal identities and to conceive Shii Muslims as part of an alternative umma (Islamic community). The project focusses on Iran, Iraq and significant but unexplored diasporic links to Syria, Kuwait and Britain. In response to the rise of modern nation-states in the Middle East, Shii clerical authorities resorted to a wide range of activities: (a) articulating intellectual responses to the ideologies underpinning modern Middle Eastern nation-states, (b) forming political parties and other platforms of socio-political activism and (c) using various forms of cultural production by systematising and promoting Shii ritual practices and utilising visual art, poetry and new media.
The project yields a perspectival shift on the factors that led to Shii communal mobilisation by:
- Analysing unacknowledged intellectual responses of Shii clerical authorities to the secular or sectarian ideologies of post-colonial nation-states and to the current sectarianisation of geopolitics in the Middle East.
- Emphasising the central role of diasporic networks in the Middle East and Europe in mobilising Shii communities and in influencing discourses and agendas of clerical authorities based in Iraq and Iran.
- Exploring new modes of cultural production in the form of a modern Shii aesthetics articulated in ritual practices, visual art, poetry and new media and thus creating a more holistic narrative on Shii religio-political mobilisation.

Max ERC Funding

1 952 374 €

Duration

Start date: 2018-01-01, End date: 2022-12-31

Project acronymALUNIF

ProjectAlgorithms and Lower Bounds: A Unified Approach

Researcher (PI)Rahul Santhanam

Host Institution (HI)THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD

Call DetailsConsolidator Grant (CoG), PE6, ERC-2013-CoG

SummaryOne of the fundamental goals of theoretical computer science is to understand the possibilities and limits of efficient computation. This quest has two dimensions. The theory of algorithms focuses on finding efficient solutions to problems, while computational complexity theory aims to understand when and why problems are hard to solve. These two areas have different philosophies and use different sets of techniques. However, in recent years there have been indications of deep and mysterious connections between them.
In this project, we propose to explore and develop the connections between algorithmic analysis and complexity lower bounds in a systematic way. On the one hand, we plan to use complexity lower bound techniques as inspiration to design new and improved algorithms for Satisfiability and other NP-complete problems, as well as to analyze existing algorithms better. On the other hand, we plan to strengthen implications yielding circuit lower bounds from non-trivial algorithms for Satisfiability, and to derive new circuit lower bounds using these stronger implications.
This project has potential for massive impact in both the areas of algorithms and computational complexity. Improved algorithms for Satisfiability could lead to improved SAT solvers, and the new analytical tools would lead to a better understanding of existing heuristics. Complexity lower bound questions are fundamental but notoriously difficult, and new lower bounds would open the way to unconditionally secure cryptographic protocols and derandomization of probabilistic algorithms. More broadly, this project aims to initiate greater dialogue between the two areas, with an exchange of ideas and techniques which leads to accelerated progress in both, as well as a deeper understanding of the nature of efficient computation.
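For context on what "non-trivial algorithms for Satisfiability" means here: the trivial baseline checks all 2^n assignments of n variables, and algorithms beating this baseline even slightly are known to imply circuit lower bounds. The sketch below is a minimal illustration of that exhaustive baseline (not part of the proposal); the function name and the DIMACS-style clause encoding are chosen for illustration only.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Trivial 2^n-time Satisfiability check: try every assignment.

    Clauses use DIMACS-style literals: the integer i stands for
    variable x_i, and -i stands for NOT x_i. A formula is a list of
    clauses, each clause a list of literals (CNF).
    """
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if at least one literal evaluates true;
        # the formula is satisfied if every clause is.
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return bits  # a satisfying assignment
    return None  # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable; x1 AND (NOT x1) is not.
print(brute_force_sat([[1, 2], [-1, 2]], 2))
print(brute_force_sat([[1], [-1]], 1))
```

Any algorithm running in time 2^(n - omega(log n)) already counts as "non-trivial" in the sense used above, which is why even modest algorithmic improvements carry lower-bound consequences.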
