Summary
Advanced insight into ever smaller structures of matter and their ever faster dynamics holds promise for pushing the frontiers of many fields in science and technology. Time-domain investigations of ultrafast microscopic processes are most successfully carried out by pump/probe experiments. Intense waveform-controlled few-cycle near-infrared laser pulses combined with isolated sub-femtosecond XUV (extreme UV) pulses have made possible direct access to electron motion on the atomic scale. These tools, along with the techniques of laser-field-controlled XUV photoemission (“attosecond streaking”) and ultrafast UV-pump/XUV-probe spectroscopy, have permitted real-time observation of electronic motion in experiments performed on atoms in the gas phase and of electronic transport processes in solids.
The purpose of this project is to gain insight into intra- and inter-molecular electron dynamics by extending attosecond spectroscopy to these processes. AEDMOS will allow control and real-time observation of a wide range of ultrafast fundamental processes directly on their natural, i.e. attosecond (1 as = 10^-18 s), time scale in molecules and molecular structures. In previous work we have successfully developed attosecond tools and techniques. By combining them with our experience in UHV technology and target preparation in a new beamline to be created in the framework of this project, we aim to investigate charge migration and transport in supramolecular assemblies, ultrafast electron dynamics in photocatalysis, and dynamics of electron correlation in high-Tc superconductors. These dynamics – of electronic excitation, exciton formation, relaxation, electron correlation and wave packet motion – are of broad scientific interest reaching from biomedicine to chemistry and physics and are pertinent to the development of many modern technologies including molecular electronics, optoelectronics, photovoltaics, light-to-chemical energy conversion and lossless energy transfer.
Max ERC Funding

1 999 375 €

Duration

Start date: 2015-05-01, End date: 2020-04-30

Project acronym
AEROSOL

Project
Astrochemistry of old stars: direct probing of unique chemical laboratories

Researcher (PI)
Leen Katrien Els Decin

Host Institution (HI)
KATHOLIEKE UNIVERSITEIT LEUVEN

Call Details
Consolidator Grant (CoG), PE9, ERC-2014-CoG

Summary
The gas and dust in the interstellar medium (ISM) drive the chemical evolution of galaxies, the formation of stars and planets, and the synthesis of complex prebiotic molecules. The prime birth places for this interstellar material are the winds of evolved (super)giant stars. These winds are unique chemical laboratories, in which a large variety of gas and dust species radially expand away from the star.
Recent progress on the observations of these winds has been impressive thanks to Herschel and ALMA. The next challenge is to unravel the wealth of chemical information contained in these data. This is an ambitious task since (1) a plethora of physical and chemical processes interact in a complex way, (2) laboratory data to interpret these interactions are lacking, and (3) theoretical tools to analyse the data do not meet current needs.
To boost the knowledge of the physics and chemistry characterizing these winds, I propose a world-leading multi-disciplinary project combining (1) high-quality data, (2) novel theoretical wind models, and (3) targeted laboratory experiments. The aim is to pinpoint the dominant chemical pathways, unravel the transition from gas-phase to dust species, elucidate the role of clumps on the overall wind structure, and study the reciprocal effect between various dynamical and chemical phenomena.
Now is the right time for this ambitious project thanks to the availability of (1) high-quality multi-wavelength data, including ALMA and Herschel data of the PI, (2) supercomputers enabling a homogeneous analysis of the data using sophisticated theoretical wind models, and (3) novel laboratory equipment to measure the gas-phase reaction rates of key species.
This project will have far-reaching impact on (1) the field of evolved stars, (2) the understanding of the chemical lifecycle of the ISM, (3) chemical studies of dynamically more complex systems, such as exoplanets, protostars, supernovae etc., and (4) it will guide new instrument development.
Host Institution (HI)
IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE

Call Details
Consolidator Grant (CoG), PE8, ERC-2017-CoG

Summary
Gas turbines are an essential ingredient in the long-term energy and aviation mix. They are flexible, offer fast start-up and the ability to burn renewable-generated fuels. However, they generate NOx emissions, which cause air pollution and damage human health, and reducing these is an air quality imperative. A major hurdle to this is that lean premixed combustion, essential for further NOx emission reductions, is highly susceptible to thermoacoustic instability. This is caused by a two-way coupling between unsteady combustion and acoustic waves, and the resulting large pressure oscillations can cause severe mechanical damage. Computational methods for predicting thermoacoustic instability, fast and accurate enough to be used as part of the industrial design process, are urgently needed.
The only computational methods with the prospect of being fast enough are those based on coupled treatment of the acoustic waves and unsteady combustion. These exploit the amenity of the acoustic waves to analytical modelling, allowing costly simulations to be directed only at the more complex flame. They show real promise: my group recently demonstrated the first accurate coupled predictions for lab-scale combustors. The method does not yet extend to industrial combustors, because the more complex flow fields in these render current acoustic models overly simplistic. I propose to comprehensively overhaul acoustic models across the entirety of the combustor, accounting for real and important acoustic-flow interactions. These new models will offer the breakthrough prospect of extending efficient, accurate predictive capability to industrial combustors, which has a real chance of facilitating future, instability-free, very low NOx gas turbines.
Max ERC Funding

1 985 288 €

Duration

Start date: 2018-06-01, End date: 2023-05-31

Project acronym
AgeConsolidate

Project
The Missing Link of Episodic Memory Decline in Aging: The Role of Inefficient Systems Consolidation

Researcher (PI)
Anders Martin FJELL

Host Institution (HI)
UNIVERSITETET I OSLO

Call Details
Consolidator Grant (CoG), SH4, ERC-2016-CoG

Summary
Which brain mechanisms are responsible for the fate of the memories we make with age, whether they wither or stay, and in what form? Episodic memory function does decline with age. While this decline can have multiple causes, research has focused almost entirely on encoding and retrieval processes, largely ignoring a third critical process – consolidation. The objective of AgeConsolidate is to provide this missing link, by combining novel experimental cognitive paradigms with neuroimaging in a longitudinal large-scale attempt to directly test how age-related changes in consolidation processes in the brain impact episodic memory decline. The ambitious aims of the present proposal are two-fold:
(1) Use recent advances in memory consolidation theory to achieve an elaborate model of episodic memory deficits in aging
(2) Use aging as a model to uncover how structural and functional brain changes affect episodic memory consolidation in general
The novelty of the project lies in the synthesis of recent methodological advances and theoretical models for episodic memory consolidation to explain age-related decline, by employing a unique combination of a range of different techniques and approaches. This is ground-breaking, in that it aims at taking our understanding of the brain processes underlying episodic memory decline in aging to a new level, while at the same time advancing our theoretical understanding of how episodic memories are consolidated in the human brain. To obtain this outcome, I will test the main hypothesis of the project: Brain processes of episodic memory consolidation are less effective in older adults, and this can account for a significant portion of the episodic memory decline in aging. This will be answered by six secondary hypotheses, with 1-3 experiments or tasks designated to address each hypothesis, focusing on functional and structural MRI, positron emission tomography data and sleep experiments to target consolidation from different angles.
Summary
The goal of the AI4REASON project is a breakthrough in what is considered a very hard problem in AI and automation of reasoning, namely the problem of automatically proving theorems in large and complex theories. Such complex formal theories arise in projects aimed at verification of today's advanced mathematics such as the Formal Proof of the Kepler Conjecture (Flyspeck), verification of software and hardware designs such as the seL4 operating system kernel, and verification of other advanced systems and technologies on which today's information society critically depends.
Designing an explicitly programmed solution to the problem seems extremely complex and unlikely to succeed. However, we have recently demonstrated that the performance of existing approaches can be multiplied by data-driven AI methods that learn reasoning guidance from large proof corpora. The breakthrough will be achieved by developing such novel AI methods. First, we will devise suitable Automated Reasoning and Machine Learning methods that learn reasoning knowledge and steer the reasoning processes at various levels of granularity. Second, we will combine them into autonomous self-improving AI systems that interleave deduction and learning in positive feedback loops. Third, we will develop approaches that aggregate reasoning knowledge across many formal, semi-formal and informal corpora and deploy the methods as strong automation services for the formal proof community.
The expected outcome is our ability to prove automatically at least 50% more theorems in high-assurance projects such as Flyspeck and seL4, bringing a major breakthrough in formal reasoning and verification. As an AI effort, the project offers a unique path to large-scale semantic AI. The formal corpora concentrate centuries of deep human thinking in a computer-understandable form on which deductive and inductive AI can be combined and co-evolved, providing new insights into how humans do mathematics and science.
Summary
The AlchemEast project is devoted to the study of alchemical theory and practice as it appeared and developed in distinct, albeit contiguous (both chronologically and geographically) areas: Graeco-Roman Egypt, Byzantium, and the Near East, from Ancient Babylonian times to the early Islamic Period. This project combines innovative textual investigations with experimental replications of ancient alchemical procedures. It uses sets of historically and philologically informed laboratory replications in order to reconstruct the actual practice of ancient alchemists, and it studies the texts and literary forms in which this practice was conceptualized and transmitted. It proposes new models for textual criticism in order to capture the fluidity of the transmission of ancient alchemical writings. AlchemEast is designed to carry out a comparative investigation of cuneiform tablets as well as a vast corpus of Greek, Syriac and Arabic writings. It will overcome the old, pejorative paradigm that dismissed ancient alchemy as a "pseudo-science", by proposing a new theoretical framework for comprehending the entirety of ancient alchemical practices and theories. Alongside established forms of scholarly output, such as critical editions of key texts, AlchemEast will provide an integrative, longue durée perspective on the many different phases of ancient alchemy. It will thus offer a radically new vision of this discipline as a dynamic and diversified art that developed across different technical and scholastic traditions. This new representation will allow us to connect ancient alchemy with medieval and early modern alchemy and thus fully reintegrate ancient alchemy in the history of pre-modern alchemy as well as in the history of ancient science more broadly.
Summary
Two-dimensional transition metal dichalcogenides (2D-TMDs) are an exciting class of new materials. Their ultrathin body, optical band gap and unusual spin and valley polarization physics make them very promising candidates for a vast new range of (opto-)electronic applications. So far, most experimental work on 2D-TMDs has been performed on exfoliated flakes made by the ‘Scotch tape’ technique. The major next challenge is the large-area synthesis of 2D-TMDs by a technique that ultimately can be used for commercial device fabrication.
Building upon pure 2D-TMDs, even more functionalities can be gained from 2D-TMD alloys and heterostructures. Theoretical work on these derivatives reveals exciting new phenomena, but experimentally this field is largely unexplored due to synthesis technique limitations.
The goal of this proposal is to combine atomic layer deposition with plasma chemistry to create a novel surface-controlled, industry-compatible synthesis technique that will make large area 2D-TMDs, 2D-TMD alloys and 2D-TMD heterostructures a reality. This innovative approach will enable systematic layer dependent studies, likely revealing exciting new properties, and provide integration pathways for a multitude of applications.
Atomistic simulations will guide the process development and, together with in- and ex-situ analysis, increase the understanding of the surface chemistry involved. State-of-the-art high resolution transmission electron microscopy will be used to study the alloying process and the formation of heterostructures. Luminescence spectroscopy and electrical characterization will reveal the potential of the synthesized materials for (opto)-electronic applications.
The synergy between the excellent background of the PI in 2D materials for nanoelectronics and the group’s leading expertise in ALD and plasma science is unique and provides an ideal stepping stone to develop the synthesis of large-area 2D-TMDs and derivatives.
Max ERC Funding

1 968 709 €

Duration

Start date: 2015-08-01, End date: 2020-07-31

Project acronym
ALERT

Project
ALERT - The Apertif-LOFAR Exploration of the Radio Transient Sky

Summary
"In our largely unchanging radio Universe, a highly dynamic component was recently discovered: flashes of bright radio emission that last only milliseconds but appear all over the sky. Some of these radio bursts can be traced to intermittently pulsating neutron stars. Other bursts however, apparently originate far outside our Galaxy. Due to great observational challenges, the evolution of the neutron stars is not understood, while more importantly, the nature of the extragalactic bursts remains an outright mystery.
My overall aim is to understand the physics that drives both kinds of brief and luminous bursts.
My primary goal is to identify the highly compact astrophysical explosions powering the extragalactic bursts. My previous surveys are the state of the art in fast-transient detection; I will now increase this exploration volume by a factor of 10. In real time I will provide arcsec positions, 10,000-fold more accurate than currently possible, to localize such extragalactic bursts for the first time and understand their origin.
My secondary goal is to unravel the unexplained evolution of intermittently pulsating neutron stars (building on e.g., my recent papers in Science, 2013), by doubling their number and modeling their population.
To achieve these goals, I will carry out a highly innovative survey: the Apertif-LOFAR Exploration of the Radio Transient Sky. ALERT is over an order of magnitude more sensitive than all current state-of-the-art fast-transient surveys.
Through its novel, extremely wide field-of-view, Westerbork/Apertif will detect many tens of extragalactic bursts. Through real-time triggers to LOFAR I will next provide the precise localisation that is essential for radio, optical and high-energy follow-up to, for the first time, shed light on the physics and objects driving these bursts – evaporating primordial black holes; explosions in host galaxies; or, the unknown?"
Summary
Alfonsine astronomy is arguably among the first European scientific achievements. It shaped a scene for actors like Regiomontanus or Copernicus. There is however little detailed historical analysis encompassing its development in its full breadth. ALFA addresses this issue by studying tables, instruments, mathematical and theoretical texts in a methodologically innovative way relying on approaches from the history of manuscript cultures, history of mathematics, and history of astronomy.
ALFA integrates these approaches not only to benefit from different perspectives but also to build new questions from their interactions. For instance, the analysis of mathematical practices in astral-science manuscripts suggests new ways to analyse the documents and to think about astronomical questions.
Relying on these approaches, the main objectives of ALFA are to:
- Retrace the development of the corpus of Alfonsine texts from its origin in the second half of the 13th century to the end of the 15th century by following, on the manuscript level, the milieus fostering it;
- Analyse the Alfonsine astronomers’ practices, their relations to mathematics, to the natural world, to proofs and justification, their intellectual context and audiences;
- Build a meaningful narrative showing how astronomers in different milieus, with diverse practices and drawing also on Arabic materials, shaped an original scientific scene in Europe.
ALFA will shed new light on the intellectual history of the late medieval period as a whole and produce a better understanding of its relations to related scientific periods in Europe and beyond. It will also produce methodological breakthroughs impacting the ways the history of knowledge is practised outside the field of ancient and medieval sciences. Efforts will be devoted to bringing these results not only to the relevant scholarly communities but also to a wider audience, as a resource in public debates around science, knowledge and culture.

Max ERC Funding

1 871 250 €

Duration

Start date: 2017-09-01, End date: 2022-08-31

Project acronym

AlgoFinance

Project

Algorithmic Finance: Inquiring into the Reshaping of Financial Markets

Researcher (PI)

Christian BORCH

Host Institution (HI)

COPENHAGEN BUSINESS SCHOOL

Call Details

Consolidator Grant (CoG), SH3, ERC-2016-COG

Summary

Present-day financial markets are turning algorithmic, as market orders are increasingly executed by fully automated computer algorithms without any direct human intervention. Although algorithmic finance seems to fundamentally reshape the central dynamics of financial markets, and even though it prompts core sociological questions, it has not yet received any systematic attention. In a pioneering contribution to economic sociology and social studies of finance, ALGOFINANCE aims to understand how, and with what consequences, the turn to algorithms is changing financial markets. The overall concept and central contributions of ALGOFINANCE are the following: (1) on an intra-firm level, the project examines how the shift to algorithmic finance reshapes the ways in which trading firms operate, and does so by systematically and empirically investigating the reconfiguration of organizational structures and employee subjectivity; (2) on an inter-algorithmic level, it offers a ground-breaking methodology (agent-based modelling informed by qualitative data) to grasp how trading algorithms interact with one another in a fully digital space; and (3) on the level of market sociality, it proposes a novel theorization of how intra-firm and inter-algorithmic dynamics can be conceived of as introducing a particular form of sociality that is characteristic of algorithmic finance: a form of sociality-as-association heuristically analyzed as imitation. None of these three levels has received systematic attention in the state-of-the-art literature. Addressing them will significantly advance the understanding of present-day algorithmic finance in economic sociology. By contributing novel empirical, methodological, and theoretical understandings of the functioning and consequences of algorithms, ALGOFINANCE will pave the way for further research into digital sociology and the broader algorithmization of society.
