Summary: Computing is moving from our desktops and dedicated devices to being everywhere. In phones, sensors, appliances, and robots, computers (from now on, devices) are everywhere and affect all aspects of our lives. Techniques to make them safe and reliable are being investigated and are starting to emerge and consolidate. However, these techniques only enable devices to work in isolation or to co-exist. We currently lack techniques that enable the development of real autonomous collaboration between devices. Such techniques would revolutionize all usage of devices and, as a consequence, our lives. Manufacturing, supply chains, transportation, infrastructure, and earth and space exploration would all be transformed by techniques that enable the development of collaborating devices.
When considering isolated (and co-existing) devices, reactive synthesis – the automatic production of plans from high-level specifications – is emerging as a viable tool for the development of robots and reactive software. This is especially important for safety-critical systems, where assurances are required and systems need guarantees on performance. The techniques developed today to support robust, assured, reliable, and adaptive devices rely on a major shift in the focus of reactive synthesis. A revolution of correct-by-construction systems built from specifications is under way and being pushed forward.
However, to extend this approach to real collaboration between devices, theoretical frameworks that enable distributed synthesis are required. Such foundations will allow the correct-by-construction revolution to unleash its potential and yield a multiplicative increase in utility through cooperative computation.
d-SynMA will take distributed synthesis to this new frontier by considering novel interaction and communication concepts, creating an adaptable framework for the correct-by-construction development of collaborating devices.
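To make the idea of synthesis concrete, the sketch below solves a tiny safety game by a fixpoint computation: it finds the states from which a controller can keep the system inside a safe set forever, whatever the environment does. The states, moves, and safe set are invented for illustration and are not part of the d-SynMA proposal.

```python
# Toy reactive-synthesis sketch: solve a two-player safety game by
# iteratively removing states from which safety cannot be maintained.

def solve_safety(controller_moves, env_moves, safe):
    """Return the states from which the controller can keep the play in `safe`.

    controller_moves / env_moves: dict state -> list of successor states.
    A state survives if it is safe and either the controller owns it and
    SOME successor survives, or the environment owns it and ALL do.
    """
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for s in list(win):
            if s in controller_moves:        # controller picks the move
                ok = any(t in win for t in controller_moves[s])
            else:                            # environment picks the move
                ok = all(t in win for t in env_moves[s])
            if not ok:
                win.discard(s)               # no way to stay safe from s
                changed = True
    return win

# Invented 4-state example: controller owns s0 and s2, environment owns
# s1 and s3; s3 is the unsafe error state.
ctrl = {"s0": ["s1"], "s2": ["s3"]}
env = {"s1": ["s0"], "s3": ["s3"]}
winning = solve_safety(ctrl, env, safe={"s0", "s1", "s2"})
# s2 is pruned because its only move leads to the error state.
```

From each state in the winning set, a correct-by-construction controller can be read off directly: pick any successor that remains in the winning set.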


Max ERC Funding

1 871 272 €

Duration

Start date: 2018-05-01, End date: 2023-04-30

Project acronym: NUCLEARWATERS

Project: Putting Water at the Centre of Nuclear Energy History

Researcher (PI): Per HÖGSELIUS

Host Institution (HI): KUNGLIGA TEKNISKA HOEGSKOLAN

Call Details: Consolidator Grant (CoG), SH6, ERC-2017-COG

Summary: NUCLEARWATERS develops a groundbreaking new approach to studying the history of nuclear energy. Rather than interpreting nuclear energy history as a history of nuclear physics and radiochemistry, it analyses it as a history of water. The project develops the argument that nuclear energy is in essence a hydraulic form of technology, and that it as such builds on centuries and even millennia of earlier hydraulic engineering efforts worldwide – and, culturally speaking, on earlier “hydraulic civilizations”, from ancient Egypt to the modern Netherlands. I investigate how historical water-manipulating technologies and wet and dry risk conceptions from a deeper past were carried on into the nuclear age. These risk conceptions brought with them a complex set of social and professional practices that displayed considerable inertia and were difficult to change – sometimes paving the way for disaster. Against this background I hypothesize that a water-centred nuclear energy history enables us to resolve a number of the key riddles in nuclear energy history and to grasp the deeper historical logic behind various nuclear disasters and accidents worldwide. The project is structured along six work packages that problematize the centrality – and dilemma – of water in nuclear energy history from different thematic and geographical angles. These include in-depth studies of the transnational nuclear-hydraulic engineering community, of the Soviet Union’s nuclear waters, of the Rhine Valley as a transnational and heavily nuclearized river basin, of Japan’s atomic coastscapes and of the ecologically and politically fragile Baltic Sea region. The ultimate ambition is to significantly revise nuclear energy history as we know it – with implications not only for the history of technology as an academic field (and its relationship with environmental history), but also for the public debate about nuclear energy’s future in Europe and beyond.


Max ERC Funding

1 991 008 €

Duration

Start date: 2018-05-01, End date: 2023-04-30

Project acronym: SEED

Project: Learning to See in a Dynamic World

Researcher (PI): Cristian Sminchisescu

Host Institution (HI): LUNDS UNIVERSITET

Call Details: Consolidator Grant (CoG), PE6, ERC-2014-CoG

Summary: The goal of SEED is to fundamentally advance the methodology of computer vision by exploiting a dynamic analysis perspective. The aim is to acquire accurate yet tractable models that can automatically learn to sense our visual world; localize still and animate objects (e.g. chairs, phones, computers, bicycles, cars, people, and animals), actions, and interactions; and capture qualitative geometrical and physical scene properties, by propagating and consolidating temporal information with minimal system training and supervision. SEED will extract descriptions that identify the precise boundaries and spatial layout of the different scene components, and the manner in which they move, interact, and change over time. For this purpose, SEED will develop novel high-order compositional methodologies for the semantic segmentation of video data acquired by observers of dynamic scenes, by adaptively integrating figure-ground reasoning based on bottom-up and top-down information, and by using weakly supervised machine learning techniques that support continuous learning towards an open-ended number of visual categories. The system will be able not only to recover detailed models of dynamic scenes, but also to forecast future actions and interactions in those scenes, over long time horizons, by contextual reasoning and inverse reinforcement learning. Two demonstrators are envisaged: the first for scene understanding and forecasting in indoor office spaces, and the second for urban outdoor environments. The methodology emerging from this research has the potential to impact fields as diverse as automatic personal assistance, video editing and indexing, robotics, environmental awareness, augmented reality, human-computer interaction, and manufacturing.
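As a minimal stand-in for the bottom-up grouping step mentioned above, the sketch below labels the connected foreground regions of a binary mask; real figure-ground reasoning in SEED is far richer, and the mask and connectivity choice here are invented for illustration.

```python
# Illustrative bottom-up grouping sketch (not SEED's actual method):
# label 4-connected foreground regions of a binary mask via flood fill.
from collections import deque

def connected_components(mask):
    """Return (number of regions, label grid) for a 2D 0/1 grid."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    n = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                n += 1                      # start a new region
                labels[r][c] = n
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = n
                            q.append((ny, nx))
    return n, labels

# An invented 3x4 mask with two separate foreground blobs.
mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
n, labels = connected_components(mask)
```

Each labelled region corresponds to one candidate "figure" that top-down reasoning could then classify or refine.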


Summary: Digital imaging of tissue samples and genetic analysis by next generation sequencing are two rapidly emerging fields in pathology. The exponential growth in digital imaging in pathology is catalyzed by more advanced imaging hardware, comparable to the complete shift from analog to digital images that took place in radiology a couple of decades ago: Entire glass slides can be digitized at near the optical resolution limits in only a few minutes’ time, and fluorescence as well as bright field stains can be imaged in parallel.
Genetic analysis, and particularly transcriptomics, is rapidly evolving thanks to the impressive development of next generation sequencing technologies, enabling genome-wide single-cell analysis of DNA and RNA in thousands of cells at constantly decreasing costs. However, most of today’s available technologies result in a genetic analysis that is decoupled from the morphological and spatial information of the original tissue sample, while many important questions in tumor- and developmental biology require single cell spatial resolution to understand tissue heterogeneity.
The goal of the proposed project is to develop computational methods that bridge these two emerging fields. We want to combine spatially resolved high-throughput genomics analysis of tissue sections with digital image analysis of tissue morphology. Together with collaborators from the biomedical field, we propose two approaches for spatially resolved genomics: one based on sequencing mRNA transcripts directly in tissue samples, and one based on spatially resolved cellular barcoding followed by single-cell sequencing. Both approaches require the development of advanced digital image processing methods. Thus, we will couple genetic analysis with digital pathology. Going beyond visual assessment of this rich digital data will be a fundamental component for the future development of histopathology, both as a diagnostic tool and as a research field.
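The barcoding approach can be sketched in miniature: each sequencing read carries a spatial barcode, and a lookup table maps barcodes to positions on the tissue section, so read counts can be aggregated per gene and grid position. The barcodes, genes, and coordinates below are invented for illustration and do not come from the project.

```python
# Hypothetical sketch of coupling sequencing reads to tissue coordinates
# via spatial barcodes.
from collections import defaultdict

# Invented lookup table: spatial barcode -> (x, y) position on the section.
barcode_to_xy = {"AAC": (0, 0), "AGT": (0, 1), "TTG": (1, 0), "TCA": (1, 1)}

# Invented reads: (spatial barcode, gene) pairs from sequencing.
reads = [("AAC", "GeneA"), ("AAC", "GeneA"), ("AGT", "GeneB"),
         ("TTG", "GeneA"), ("TCA", "GeneB"), ("TCA", "GeneB")]

def spatial_counts(reads, barcode_to_xy):
    """Aggregate read counts per ((x, y), gene) on the tissue grid."""
    counts = defaultdict(int)
    for barcode, gene in reads:
        if barcode in barcode_to_xy:      # ignore unmapped barcodes
            counts[(barcode_to_xy[barcode], gene)] += 1
    return dict(counts)

counts = spatial_counts(reads, barcode_to_xy)
```

The resulting per-position expression table is what would then be overlaid on the digitized tissue image for joint morphological and genetic analysis.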


Max ERC Funding

1 738 690 €

Duration

Start date: 2016-04-01, End date: 2021-03-31

Project acronym: ULTRA

Project: Increasing the Spatial Correlation of Logical Units of Data to Enable an Ultra-Low Latency Internet

Researcher (PI): Dejan Manojlo KOSTIC

Host Institution (HI): KUNGLIGA TEKNISKA HOEGSKOLAN

Call Details: Consolidator Grant (CoG), PE6, ERC-2017-COG

Summary: The cloud computing infrastructure that logically centralizes data storage and computation for many different actors is a prime example of a key societal system. A number of time-critical applications deployed in the cloud infrastructure have to provide high reliability and throughput, along with guaranteed low latency for delivering data. This low latency guarantee is sorely lacking today, with the so-called tail latency of the slowest responses in popular cloud services being several orders of magnitude longer than the median response times. Unfortunately, simply using a network with ample bandwidth does not guarantee low latency because of problems with congestion at the intra- and inter-data center levels and server overloads. All of these problems currently render the existing cloud infrastructures unsuitable for time-critical societal applications. The reasons for unpredictable delays across the Internet and within the cloud infrastructure are numerous, but some of the key culprits are: 1) slow memory subsystems limit server effectiveness, and 2) excess buffering in the Internet further limits correlation of data requests.
The aim of this project is to dramatically change the way data flows across the Internet, such that it is more highly correlated when it is to be processed at the servers. The overarching goal is to enforce a large degree of correlation in the data requests (logical units of data), both temporally (across time) and spatially (as server work units require correlation to achieve high cache hit rates). The result is that the logical units of data will be processed at almost the maximum processing speed of the cloud servers. By doing so, we will achieve an ultra-low latency Internet. This project will produce the tools and knowledge that will be key to dramatically reducing the latency of key societal services; these include cloud services used by a large number of users on a daily basis.
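The effect of request correlation on cache efficiency can be illustrated with a toy simulation: the same multiset of requests yields a far higher hit rate in a small LRU cache when requests for the same logical unit of data arrive together. The cache model and workload below are invented for illustration and are not ULTRA's actual mechanism.

```python
# Toy illustration: temporal correlation of requests vs. LRU hit rate.
from collections import OrderedDict

def hit_rate(requests, capacity):
    """Fraction of requests served from an LRU cache holding `capacity` keys."""
    cache, hits = OrderedDict(), 0
    for key in requests:
        if key in cache:
            hits += 1
            cache.move_to_end(key)          # mark as most recently used
        else:
            cache[key] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return hits / len(requests)

keys = list(range(8))
interleaved = keys * 4          # 0,1,...,7,0,1,...: no temporal locality
grouped = sorted(interleaved)   # 0,0,0,0,1,1,...: requests for the same
                                # unit of data arrive together

low = hit_rate(interleaved, capacity=4)   # every request misses
high = hit_rate(grouped, capacity=4)      # only the first per key misses
```

With a cache of 4 keys cycling over 8, the interleaved order never hits, while the grouped order hits on 3 of every 4 requests, which is the intuition behind enforcing spatial and temporal correlation of logical units of data.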
