Summary: Online markets currently form an important share of the global economy. The Internet hosts classical markets (real estate, stocks, e-commerce) while also enabling new markets with previously unknown features (web-based advertisement, viral marketing, digital goods, crowdsourcing, the sharing economy). Algorithms play a central role in many decision processes involved in online markets: for example, algorithms run electronic auctions, trade stocks, adjust prices dynamically, and harvest big data to provide economic information. It is therefore of paramount importance to understand the algorithmic and mechanism design foundations of online markets.
The algorithmic research issues we consider involve algorithmic mechanism design, online and approximation algorithms, modelling uncertainty in online market design, and large-scale data analysis. The aim of this research project is to combine these fields to address research questions that are central to today's Internet economy. We plan to apply these techniques to solve fundamental algorithmic problems motivated by Internet advertisement, the sharing economy, and online labour marketplaces. While my planned research is centered on foundational work with rigorous design and analysis of algorithms and mechanisms, it will also include, as an important component, empirical validation on large-scale real-life datasets.
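As a concrete, deliberately simplified illustration of the kind of mechanism the summary refers to (the bidder names and values below are hypothetical), the following sketch implements a single-item second-price (Vickrey) auction, a textbook example from algorithmic mechanism design in which truthful bidding is a dominant strategy:

```python
# Minimal sketch of a single-item second-price (Vickrey) auction: the highest
# bidder wins but pays the second-highest bid, which makes truthful bidding a
# dominant strategy. Bidder names and bids are hypothetical.

def second_price_auction(bids):
    """bids: dict mapping bidder name -> non-negative bid; returns (winner, price)."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0  # pay the second-highest bid
    return winner, price

if __name__ == "__main__":
    print(second_price_auction({"alice": 5.0, "bob": 7.5, "carol": 6.0}))  # ('bob', 6.0)
```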


Max ERC Funding

1 780 150 €

Duration

Start date: 2018-07-01, End date: 2023-06-30

Project acronym: BIOINOHYB

Project: Smart Bioinorganic Hybrids for Nanomedicine

Researcher (PI): Cristiana Di Valentin

Host Institution (HI): UNIVERSITA' DEGLI STUDI DI MILANO-BICOCCA

Call Details: Consolidator Grant (CoG), PE5, ERC-2014-CoG

Summary: The use of bioinorganic nanohybrids (nanoscaled systems based on an inorganic and a biological component) has already resulted in several innovative medical breakthroughs for drug delivery, therapeutics, imaging, diagnosis and biocompatibility. However, researchers still know relatively little about the structure, function and mechanism of these nanodevices. Theoretical investigations of bioinorganic interfaces are mostly limited to force-field approaches, which cannot grasp the details of the physicochemical mechanisms. The BIOINOHYB project proposes to capitalize on recent massively parallelized codes to investigate bioinorganic nanohybrids by advanced quantum chemical methods. This approach will make it possible to master the chemical and electronic interplay between the bio and the inorganic components in the first part of the project, and the interaction of the hybrid systems with light in the second part. The ultimate goal is to provide the design principles for novel, unconventional assemblies with unprecedented functionalities and strong impact potential in nanomedicine.
More specifically, in this project the traditional metallic nanoparticle will be replaced by emerging semiconducting metal oxide nanostructures with photocatalytic or magnetic properties capable of opening totally new horizons in nanomedicine (e.g. photocatalytic therapy, a new class of contrast agents, magnetically guided drug delivery). Potentially efficient linkers will be screened for their ability both to anchor surfaces and to bind biomolecules. Different kinds of biomolecules (from oligopeptides and oligonucleotides to small drugs) will be tethered to the activated surface according to the desired functionality. The key computational challenge, requiring recourse to more sophisticated methods, will be the investigation of the photo-response to light of the assembled bioinorganic systems, also with specific reference to their labelling with fluorescent markers and contrast agents.


Summary: We propose the development of novel nanodevices, such as nanoscale bridges and nanovectors, based on functionalized carbon nanotubes (CNTs) for manipulating neurons and neuronal network activity in vitro. The main aim is to put forward innovative solutions that have the potential to circumvent the problems currently posed by spinal cord lesions or by neurodegenerative diseases. The unifying theme is to use recent advances in chemistry and nanotechnology to gain insight into the functioning of hybrid neuronal/CNT networks, relevant for the development of novel implantable devices to control neuronal signaling and improve synapse formation in a controlled fashion. The proposal's core strategy is to exploit the expertise of the PI in the chemical control of CNT properties to develop devices reaching various degrees of functional integration with the physiological electrical activity of cells and their networks, and to understand how such global dynamics are orchestrated when integrated by different substrates. An unconventional strategy will be the electrical characterization of micro- and nano-patterned substrates by AFM and conductive-tip AFM, both before and after neurons have grown on the substrates. We will also use the capability of AFM to identify critical positions in the neuronal network while delivering time-dependent chemical stimulations. We will apply nanotechnology to contemporary neuroscience with a view to novel neuro-implantable devices and drug nanovectors, engineered to treat neurological and neurodegenerative lesions. The scientific strategy at the core of the proposal is the convergence of nanotechnology, chemistry and neurobiology. Such convergence, beyond helping understand the functioning and malfunctioning of the brain, can stimulate further research in this area and may ultimately lead to a new generation of nanomedicine applications in neurology and to new opportunities for the health care industry.


Max ERC Funding

2 500 000 €

Duration

Start date: 2009-02-01, End date: 2014-01-31

Project acronym: CME

Project: Concurrency Made Easy

Researcher (PI): Bertrand Philippe Meyer

Host Institution (HI): POLITECNICO DI MILANO

Call Details: Advanced Grant (AdG), PE6, ERC-2011-ADG_20110209

Summary: The “Concurrency Made Easy” project is an attempt to achieve a conceptual breakthrough on the most daunting challenge in information technology today: mastering concurrency. Concurrency, once a specialized technique for experts, is forcing itself onto the entire IT community because of a disruptive phenomenon: the “end of Moore’s law as we know it”. Increases in performance can no longer happen through raw hardware speed, but only through concurrency, as in multicore architectures. Concurrency is also critical for networking, cloud computing and the progress of the natural sciences. Software support for these advances lags, mired in concepts from the 1960s such as semaphores. Existing formal models are hard to apply in practice. Incremental progress is not sufficient; neither are techniques that place the burden on programmers, who cannot all be expected to become concurrency experts. The CME project attempts a major shift on the side of the supporting technology: languages, formal models, verification techniques. The core idea of the CME project is to make concurrency easy for programmers by building on established ideas of modern programming methodology (object technology, Design by Contract) and shifting the concurrency difficulties to the internals of the model and implementation.
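As a minimal, language-agnostic sketch of the Design by Contract idea referred to above (not of the SCOOP concurrency model itself; the bank-account example is hypothetical), the routine below makes its precondition, postcondition and class invariant explicit as runtime checks:

```python
# Minimal sketch of Design by Contract: a routine states what it requires
# (precondition) and what it ensures (postcondition). The bank-account
# example is hypothetical and only illustrates the contract idea.

class Account:
    def __init__(self, balance=0):
        assert balance >= 0                      # class invariant at creation
        self.balance = balance

    def withdraw(self, amount):
        # require: the amount is positive and covered by the balance
        assert 0 < amount <= self.balance
        old_balance = self.balance
        self.balance -= amount
        # ensure: the balance decreased by exactly the withdrawn amount
        assert self.balance == old_balance - amount
        assert self.balance >= 0                 # class invariant preserved

if __name__ == "__main__":
    acc = Account(100)
    acc.withdraw(30)
    print(acc.balance)  # 70
```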
The project includes the following elements.
1. Sound conceptual model for concurrency. The starting point is the influential previous work of the PI: concepts of object-oriented design, particularly Design by Contract, and the SCOOP concurrency model.
2. Reference implementation, integrated into an IDE.
3. Performance analysis.
4. Theory and formal basis, including full semantics.
5. Proof techniques, compatible with proof techniques for the sequential part.
6. Complementary verification techniques such as concurrent testing.
7. Library of concurrency components and examples.
8. Publication, including a major textbook on concurrency.


Summary: Deep learning is revolutionizing the field of Natural Language Processing (NLP), with breakthroughs in machine translation, speech recognition, and question answering. New language interfaces (digital assistants, messenger apps, customer service bots) are emerging as the next technologies for seamless, multilingual communication among humans and machines.
From a machine learning perspective, many problems in NLP can be characterized as structured prediction: they involve predicting structurally rich and interdependent output variables. In spite of this, current neural NLP systems ignore the structural complexity of human language, relying on simplistic and error-prone greedy search procedures. This leads to serious mistakes in machine translation, such as words being dropped or named entities mistranslated. More broadly, neural networks are missing the key structural mechanisms for solving complex real-world tasks requiring deep reasoning.
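As a minimal sketch of the greedy decoding the previous paragraph refers to (the vocabulary and scoring function below are hypothetical placeholders, not any real system's), the decoder commits at every step to the single most probable next token and never revisits earlier choices:

```python
import math
import random

# Minimal sketch of greedy (argmax) decoding: at each step the single most
# probable next token is chosen and earlier choices are never revisited.
# The vocabulary and scoring function are hypothetical placeholders
# standing in for a trained neural decoder.

VOCAB = ["<eos>", "the", "cat", "sat", "on", "mat"]

def next_token_probs(prefix):
    # Placeholder for a decoder's next-token distribution: a softmax over
    # pseudo-random scores seeded by the prefix length.
    rng = random.Random(len(prefix))
    logits = [rng.gauss(0.0, 1.0) for _ in VOCAB]
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]

def greedy_decode(max_len=10):
    prefix = []
    for _ in range(max_len):
        probs = next_token_probs(prefix)
        token = VOCAB[max(range(len(VOCAB)), key=probs.__getitem__)]  # locally best token
        if token == "<eos>":
            break
        prefix.append(token)
    return prefix

if __name__ == "__main__":
    print(greedy_decode())
```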
This project attacks these fundamental problems by bringing together deep learning and structured prediction, with a highly disruptive and cross-disciplinary approach. First, I will endow neural networks with a "planning mechanism" to guide structural search, letting decoders learn the optimal order by which they should operate. This makes a bridge with reinforcement learning and combinatorial optimization. Second, I will develop new ways of automatically inducing latent structure inside the network, making it more expressive, scalable and interpretable. Synergies with probabilistic inference and sparse modeling techniques will be exploited. To complement these two innovations, I will investigate new ways of incorporating weak supervision to reduce the need for labeled data.
Three highly challenging applications will serve as testbeds: machine translation, quality estimation, and dependency parsing. To maximize technological impact, a collaboration is planned with a start-up company in the crowd-sourcing translation industry.


Max ERC Funding

1 436 000 €

Duration

Start date: 2018-02-01, End date: 2023-01-31

Project acronym: DEPENDABLECLOUD

Project: Towards the dependable cloud:
Building the foundations for tomorrow's dependable cloud computing

Summary: Cloud computing is being increasingly adopted by individuals, organizations, and governments. However, as the computations that are offloaded to the cloud expand to societal-critical services, the dependability requirements of cloud services become much higher, and we need to ensure that the infrastructure that supports these services is ready to meet these requirements. In particular, this proposal tackles the challenges that arise from two distinctive characteristics of the cloud infrastructure.
The first is that non-crash faults, despite being considered highly unlikely by the designers of traditional systems, become commonplace at the scale and complexity of the cloud infrastructure. We argue that the current ad-hoc methods for handling these faults are insufficient, and that the only principled approach of assuming Byzantine faults is too pessimistic. Therefore, we call for a new systematic approach to tolerating non-crash, non-adversarial faults. This requires the definition of a new fault model, and the construction of a series of building blocks and key protocol elements that enable the construction of fault-tolerant cloud services.
The second issue is that to meet their scalability requirements, cloud services spread their state across multiple data centers, and direct users to the closest one. This raises the issue that not all operations can be executed optimistically, without being aware of concurrent operations over the same data, and thus multiple levels of consistency must coexist. However, this puts the onus of reasoning about which behaviors are allowed under such a hybrid consistency model on the programmer of the service. We propose a systematic solution to this problem, which includes a novel consistency model that allows for developing highly scalable services that are fast when possible and consistent when necessary, and a labeling methodology to guide the programmer in deciding which operations can run at each consistency level.
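Purely as a hypothetical sketch of what such a labeling methodology could look like at the programming interface (the consistency levels, decorator, and wallet example below are illustrative assumptions, not the proposal's actual design), each operation can carry an explicit label stating the consistency it requires:

```python
from enum import Enum
from functools import wraps

# Hypothetical sketch of labeling operations with the consistency level they
# need; the levels, decorator, and example service are illustrative only.

class Consistency(Enum):
    EVENTUAL = "eventual"   # fast path: may observe concurrent updates late
    STRONG = "strong"       # coordinated path: serialized across data centers

def consistency(level):
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper.consistency = level              # attach the label for the runtime
        return wrapper
    return decorate

class Wallet:
    def __init__(self):
        self.balance = 0

    @consistency(Consistency.EVENTUAL)
    def deposit(self, amount):                   # commutes with other deposits
        self.balance += amount

    @consistency(Consistency.STRONG)
    def withdraw(self, amount):                  # must not overdraw under concurrency
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

if __name__ == "__main__":
    w = Wallet()
    w.deposit(10)
    print(Wallet.withdraw.consistency)  # Consistency.STRONG
```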


Max ERC Funding

1 076 084 €

Duration

Start date: 2012-10-01, End date: 2018-01-31

Project acronym: DIAPASoN

Project: Differential Program Semantics

Researcher (PI): Ugo DAL LAGO

Host Institution (HI): ALMA MATER STUDIORUM - UNIVERSITA DI BOLOGNA

Call Details: Consolidator Grant (CoG), PE6, ERC-2018-COG

Summary: Traditionally, program semantics is centered around the notion of program identity, that is to say of program equivalence: a program is identified with its meaning, and programs are considered equal only if their meanings are the same. This view has been extremely fruitful in the past, allowing for a deep understanding of highly interactive forms of computation as embodied by higher-order or concurrent programs. The byproducts of all this lie everywhere in computer science, from programming language design to verification methodologies. The emphasis on equality, as opposed to differences, is however not in line with the way programs are written and structured in modern complex software systems. Subtasks are delegated to pieces of code which behave as expected only up to a certain probability of error, and only if the environment in which they operate makes this possible deviation irrelevant. These aspects have been almost neglected by the program semantics community until recently, and still play a marginal role. DIAPASoN's goal is to study differences between programs as a constitutive and informative concept, rather than only by way of equivalence relations between them. This will be accomplished by generalizing four major frameworks of program semantics, traditionally used for giving meaning to programs, comparing them, proving properties of them, and controlling their usage of resources: logical relations, bisimulation, game semantics, and linear logic.
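As a loose, purely illustrative sketch of treating differences between programs quantitatively rather than through an equivalence relation (this is one possible reading, not the semantic frameworks the project will actually generalize; the two toy programs are hypothetical stand-ins), one can estimate how far apart two randomized programs are as the empirical probability that they disagree on sampled inputs:

```python
import random

# Illustrative-only sketch: estimate a "distance" between two programs as the
# empirical probability that they produce different outputs on random inputs.
# Both toy programs below are hypothetical stand-ins.

def prog_exact(x):
    return x % 3 == 0

def prog_approx(x):
    # Behaves like prog_exact except for a small probability of error.
    if random.random() < 0.05:
        return not (x % 3 == 0)
    return x % 3 == 0

def empirical_difference(p, q, trials=10_000, seed=0):
    rng = random.Random(seed)
    disagreements = 0
    for _ in range(trials):
        x = rng.randrange(1_000)
        if p(x) != q(x):
            disagreements += 1
    return disagreements / trials

if __name__ == "__main__":
    print(empirical_difference(prog_exact, prog_approx))  # roughly 0.05
```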


Max ERC Funding

959 562 €

Duration

Start date: 2019-03-01, End date: 2024-02-29

Project acronym: DMAP

Project: Data Mining Algorithms in Practice

Researcher (PI): Flavio Chierichetti

Host Institution (HI): UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA

Call Details: Starting Grant (StG), PE6, ERC-2015-STG

Summary: Data Mining algorithms are a cornerstone of today's Internet-related services and products. We aim to tackle some of the most important problems in Data Mining: our goal is to develop a systematic theoretical understanding of certain simple algorithms that, in spite of being at the core of today's web industry, are not yet well understood in terms of their properties and performance, and to develop new simple algorithms for fundamental problems in this domain that have so far escaped a satisfactory solution.


Max ERC Funding

1 137 500 €

Duration

Start date: 2016-02-01, End date: 2021-01-31

Project acronym: DREAMS

Project: Development of a Research Environment for Advanced Modelling of Soft matter

Researcher (PI): Vincenzo Barone

Host Institution (HI): SCUOLA NORMALE SUPERIORE

Call Details: Advanced Grant (AdG), PE5, ERC-2012-ADG_20120216

Summary"DREAMS aims at developing an integrated theoretical-computational approach for the efficient description of linear and non-linear spectroscopies of several classes of organic probes, dispersed in polymeric matrices that range in complexity from simple polyolefins all the way to large biomolecules (proteins and polysaccharides).
In order to reach this objective, developments along the following lines are required: (i) elaboration of new theoretical models, to expand the scope of currently available treatments; (ii) definition of specific treatments for intermediate regions / regimes in the context of space- and time-multiscale descriptions; (iii) algorithmic implementation of the developed models / protocols in computational codes and, (iv) their efficient integration allowing for seamless flow of information and easy use by non-specialists.
A crucial asset for the success of the planned theoretical-computational developments is represented by an extensive network of solid collaborations with leading experimental groups, that will be involved in the synthesis and characterization of the different chromophore / matrix systems, as well as in the in-depth characterization of their spectroscopic responses. These interactions will thus allow for a stringent and exhaustive validation of the capabilities required of a general and versatile computational tool; at the same time, the experimental groups will make full use of advanced theoretical interpretations in the context of a real-world technological problem.
In summary, DREAMS relies on a carefully planned combination of theoretical developments, computational implementations, and interactions with experimentalists, in order to achieve a novel and cutting-edge result, namely to provide the scientific community with a set of computational tools that will make possible the simulation and prediction of response and spectroscopic properties of multi-component materials."

"DREAMS aims at developing an integrated theoretical-computational approach for the efficient description of linear and non-linear spectroscopies of several classes of organic probes, dispersed in polymeric matrices that range in complexity from simple polyolefins all the way to large biomolecules (proteins and polysaccharides).
In order to reach this objective, developments along the following lines are required: (i) elaboration of new theoretical models, to expand the scope of currently available treatments; (ii) definition of specific treatments for intermediate regions / regimes in the context of space- and time-multiscale descriptions; (iii) algorithmic implementation of the developed models / protocols in computational codes and, (iv) their efficient integration allowing for seamless flow of information and easy use by non-specialists.
A crucial asset for the success of the planned theoretical-computational developments is represented by an extensive network of solid collaborations with leading experimental groups, that will be involved in the synthesis and characterization of the different chromophore / matrix systems, as well as in the in-depth characterization of their spectroscopic responses. These interactions will thus allow for a stringent and exhaustive validation of the capabilities required of a general and versatile computational tool; at the same time, the experimental groups will make full use of advanced theoretical interpretations in the context of a real-world technological problem.
In summary, DREAMS relies on a carefully planned combination of theoretical developments, computational implementations, and interactions with experimentalists, in order to achieve a novel and cutting-edge result, namely to provide the scientific community with a set of computational tools that will make possible the simulation and prediction of response and spectroscopic properties of multi-component materials."

Summary: Molecular recognition plays a fundamental role in nearly all chemical and biological processes. The objective of this research project is to develop new methodology for studying and utilizing the noncovalent recognition between two molecular entities, focussing on biomolecular receptors and catalysts. A dynamic covalent capture strategy is proposed, characterized by the following strengths. The target itself self-selects the best component out of a combinatorial library. The approach has a very high sensitivity, because molecular recognition occurs intramolecularly, and is very flexible, which allows for easy implementation in very diverse research areas simply by changing the target. The dynamic covalent capture strategy is strongly embedded in the fields of supramolecular chemistry and (physical) organic chemistry. Nonetheless, the different work programmes strongly rely on input from other areas, such as combinatorial chemistry, bioorganic chemistry, catalysis and computational chemistry, which renders the project highly interdisciplinary. Identified targets are new synthetic catalysts for the selective cleavage of biologically relevant compounds (D-Ala-D-Lac, cocaine and acetylcholine, and in a later stage peptides and DNA/RNA). Applicative work programmes are dedicated to the dynamic imprinting of monolayers on nanoparticles for multivalent recognition and cleavage of biologically relevant targets in vivo, and to the development of new screening methodology for measuring chemical equilibria and, specifically, for the discovery of new HIV-1 fusion inhibitors.
