Summary

I intend to investigate what cognitive mechanisms give us combinatorial speech. Combinatorial speech is the ability to make new words using pre-existing speech sounds. Humans are the only apes that can do this, yet we do not know how our brains do it, nor how exactly we differ from other apes. Using new experimental techniques to study human behavior and new computational techniques to model human cognition, I will find out how we deal with combinatorial speech.
The experimental part will study individual and cultural learning. Experimental cultural learning is a new technique that simulates cultural evolution in the laboratory. Two types of cultural learning will be used: iterated learning, which simulates language transfer across generations, and social coordination, which simulates the emergence of norms in a language community. Using the two types of cultural learning together with individual learning experiments will help to zero in, from three angles, on how humans deal with combinatorial speech. In addition, it will make a methodological contribution by comparing the strengths and weaknesses of the three methods.
The computer modeling part will formalize hypotheses about how our brains deal with combinatorial speech. Two models will be built: a high-level model that will establish the basic algorithms with which combinatorial speech is learned and reproduced, and a neural model that will establish in more detail how these algorithms are implemented in the brain. In addition, by increasing our understanding of how humans deal with speech, the models will help bridge the performance gap between human and computer speech recognition.
The project will advance science in four ways: it will provide insight into how our unique ability to use combinatorial speech works, it will tell us how this ability is implemented in the brain, it will extend the novel methodology of experimental cultural learning, and it will create new computer models for dealing with human speech.
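To make the iterated-learning paradigm concrete, here is a minimal sketch of a transmission chain, illustrative only and not the project's experimental design: each simulated learner reproduces a small lexicon with occasional sound-substitution noise and passes it to the next generation. The sound inventory, noise rate and seed lexicon are all invented for illustration.

```python
# A toy iterated-learning chain: words drift as they are transmitted from
# learner to learner. This illustrates only the structure of the paradigm,
# not any empirical result about combinatorial speech.
import random

SOUNDS = "ptkaiu"                     # invented toy sound inventory
random.seed(1)

def noisy_copy(word, noise=0.1):
    """Reproduce a word, substituting each sound with probability `noise`."""
    return "".join(random.choice(SOUNDS) if random.random() < noise else s
                   for s in word)

def generation(lexicon, noise=0.1):
    """One learner: hear each word, then pass on a noisy reproduction."""
    return [noisy_copy(w, noise) for w in lexicon]

lexicon = ["pata", "kiti", "tuku", "pipa"]   # invented seed lexicon
for _ in range(10):                          # a ten-generation chain
    lexicon = generation(lexicon)
print(lexicon)                               # the drifted lexicon
```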

Max ERC Funding

1 276 620 €

Duration

Start date: 2012-02-01, End date: 2017-01-31

Project acronym

ADAM

Project

The Adaptive Auditory Mind

Researcher (PI)

Shihab Shamma

Host Institution (HI)

ECOLE NORMALE SUPERIEURE

Call Details

Advanced Grant (AdG), SH4, ERC-2011-ADG_20110406

Summary

Listening in realistic situations is an active process that engages perceptual and cognitive faculties, endowing speech with meaning, music with joy, and environmental sounds with emotion. Through hearing, humans and other animals navigate complex acoustic scenes, separate sound mixtures, and assess their behavioral relevance. These remarkable feats are currently beyond our understanding and exceed the capabilities of the most sophisticated audio engineering systems. The goal of the proposed research is to investigate experimentally a novel view of hearing, in which active hearing emerges from a deep interplay between adaptive sensory processes and goal-directed cognition. Specifically, we shall explore the postulate that versatile perception is mediated by rapid plasticity at the neuronal level. At the conjunction of sensory and cognitive processing, rapid plasticity pervades all levels of the auditory system, from the cochlea up to the auditory and prefrontal cortices. By exploiting fundamental statistical regularities of acoustics, it allows humans and other animals to deal successfully with natural acoustic scenes where artificial systems fail. The project builds on the internationally recognized leadership of the PI in the fields of physiology and computational modeling, combined with the expertise of the Co-Investigator in psychophysics. Building on these highly complementary fields and several technical innovations, we hope to promote a novel view of auditory perception and cognition. We also aim to contribute significantly to translational research in the domain of signal processing for clinical hearing aids, given that many current limitations are not technological but conceptual. Finally, the project will result in the creation of laboratory facilities and an intellectual network unique in France and rare in all of Europe, combining cognitive, neural, and computational approaches to auditory neuroscience.

Max ERC Funding

3 199 078 €

Duration

Start date: 2012-10-01, End date: 2018-09-30

Project acronym

ADDICTION

Project

Beyond the Genetics of Addiction

Researcher (PI)

Jacqueline Mignon Vink

Host Institution (HI)

STICHTING KATHOLIEKE UNIVERSITEIT

Call Details

Starting Grant (StG), SH4, ERC-2011-StG_20101124

Summary

My proposal seeks to explain the complex interplay between genetic and environmental causes of individual variation in substance use and the risk for abuse. Substance use is common. Substances like nicotine and cannabis have well-known negative health consequences, while alcohol and caffeine use may be both beneficial and detrimental, depending on quantity and frequency of use. Twin studies (including my own) have demonstrated that both heritable and environmental factors play a role.
My proposal on substance use (nicotine, alcohol, cannabis and caffeine) is organized around several key objectives: 1. To unravel the complex contribution of genetic and environmental factors to substance use by using extended twin-family designs; 2. To identify and confirm genes and gene networks involved in substance use by using DNA-variant data; 3. To explore gene expression patterns with RNA data in substance users versus non-users; 4. To investigate biomarkers in substance users versus non-users using blood or urine; 5. To unravel the relation between substance use and health by linking twin-family data to national medical databases.
To realize these aims I will use the extensive resources of the Netherlands Twin Register (NTR), including both the longitudinal phenotype database and the biological samples. I have been involved in data collection, the coordination of data collection and the analysis of NTR data since 1999. With my comprehensive experience in data collection and analysis and my knowledge in the field of behavior genetics and addiction research, I will be able to successfully lead this cutting-edge project. Additional data crucial for the project will be collected by my team. Large samples will be available for this study and state-of-the-art methods will be used to analyze the data. Altogether, my project will offer powerful approaches to unravel the complex interaction between genetic and environmental causes of individual differences in substance use and the risk for abuse.
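The logic behind the twin designs of objective 1 can be made concrete with the classical ACE decomposition: monozygotic twins share essentially all segregating genes and dizygotic twins on average half, so comparing their phenotypic correlations separates genetic from environmental variance. A minimal sketch using Falconer's formulas; the correlations below are invented for illustration, not NTR data.

```python
# Classical twin-design variance decomposition (ACE model) via Falconer's
# formulas. Illustrative only; real analyses use structural equation models
# fitted to full twin-family data.

def ace_from_twin_correlations(r_mz, r_dz):
    """Split phenotypic variance into additive-genetic (A), shared-environment
    (C) and unique-environment (E) components.

    MZ twins share all segregating genes, DZ twins on average half, so
    r_mz ~ a2 + c2 and r_dz ~ 0.5 * a2 + c2.
    """
    a2 = 2 * (r_mz - r_dz)   # heritability
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # unique environment + measurement error
    return a2, c2, e2

# Made-up example correlations -> roughly (0.50, 0.10, 0.40)
print(ace_from_twin_correlations(r_mz=0.60, r_dz=0.35))
```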

Max ERC Funding

1 491 964 €

Duration

Start date: 2011-12-01, End date: 2017-05-31

Project acronym

ALGILE

Project

Foundations of Algebraic and Dynamic Data Management Systems

Researcher (PI)

Christoph Koch

Host Institution (HI)

ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE

Call Details

Starting Grant (StG), PE6, ERC-2011-StG_20101014

Summary"Contemporary database query languages are ultimately founded on logic and feature an additive operation – usually a form of (multi)set union or disjunction – that is asymmetric in that additions or updates do not always have an inverse. This asymmetry puts a greater part of the machinery of abstract algebra for equation solving outside the reach of databases. However, such equation solving would be a key functionality that problems such as query equivalence testing and data integration could be reduced to: In the current scenario of the presence of an asymmetric additive operation they are undecidable. Moreover, query languages with a symmetric additive operation (i.e., which has an inverse and is thus based on ring theory) would open up databases for a large range of new scientific and mathematical applications.
The goal of the proposed project is to reinvent database management systems with a foundation in abstract algebra and specifically in ring theory. The presence of an additive inverse allows to cleanly define differences between queries. This gives rise to a database analog of differential calculus that leads to radically new incremental and adaptive query evaluation algorithms that substantially outperform the state of the art techniques. These algorithms enable a new class of systems which I call Dynamic Data Management Systems. Such systems can maintain continuously fresh query views at extremely high update rates and have important applications in interactive Large-scale Data Analysis. There is a natural connection between differences and updates, motivating the group theoretic study of updates that will lead to better ways of creating out-of-core data processing algorithms for new storage devices. Basing queries on ring theory leads to a new class of systems, Algebraic Data Management Systems, which herald a convergence of database systems and computer algebra systems."
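The ring idea can be made concrete with a toy example, a sketch of the general technique rather than the project's actual system: represent a relation as a mapping from tuples to integer multiplicities, so the additive operation has an inverse and a deletion is simply an insertion with multiplicity -1. A join view can then be kept fresh by applying only a delta, following the rule d(R join S) = dR join S + R join dS + dR join dS.

```python
# Relations as mappings tuple -> multiplicity in the ring Z: deletions are
# insertions with multiplicity -1, and a join view is maintained incrementally
# from deltas instead of being recomputed. Illustrative sketch only.
from collections import defaultdict

def add(r, s):
    """Ring addition: pointwise sum of multiplicities, dropping zeros."""
    out = defaultdict(int)
    for rel in (r, s):
        for t, m in rel.items():
            out[t] += m
    return {t: m for t, m in out.items() if m != 0}

def join(r, s):
    """Natural join on the first attribute; multiplicities multiply."""
    out = defaultdict(int)
    for (k1, a), m1 in r.items():
        for (k2, b), m2 in s.items():
            if k1 == k2:
                out[(k1, a, b)] += m1 * m2
    return dict(out)

def delta_join(r, dr, s, ds):
    """d(R join S) = dR join S + R join dS + dR join dS."""
    return add(add(join(dr, s), join(r, ds)), join(dr, ds))

R = {("x", 1): 1, ("y", 2): 1}
S = {("x", 10): 1}
view = join(R, S)                         # {('x', 1, 10): 1}
dR = {("x", 1): -1, ("x", 3): 1}          # delete ('x', 1), insert ('x', 3)
view = add(view, delta_join(R, dR, S, {}))
print(view)                               # {('x', 3, 10): 1}
```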

"Contemporary database query languages are ultimately founded on logic and feature an additive operation – usually a form of (multi)set union or disjunction – that is asymmetric in that additions or updates do not always have an inverse. This asymmetry puts a greater part of the machinery of abstract algebra for equation solving outside the reach of databases. However, such equation solving would be a key functionality that problems such as query equivalence testing and data integration could be reduced to: In the current scenario of the presence of an asymmetric additive operation they are undecidable. Moreover, query languages with a symmetric additive operation (i.e., which has an inverse and is thus based on ring theory) would open up databases for a large range of new scientific and mathematical applications.
The goal of the proposed project is to reinvent database management systems with a foundation in abstract algebra and specifically in ring theory. The presence of an additive inverse allows to cleanly define differences between queries. This gives rise to a database analog of differential calculus that leads to radically new incremental and adaptive query evaluation algorithms that substantially outperform the state of the art techniques. These algorithms enable a new class of systems which I call Dynamic Data Management Systems. Such systems can maintain continuously fresh query views at extremely high update rates and have important applications in interactive Large-scale Data Analysis. There is a natural connection between differences and updates, motivating the group theoretic study of updates that will lead to better ways of creating out-of-core data processing algorithms for new storage devices. Basing queries on ring theory leads to a new class of systems, Algebraic Data Management Systems, which herald a convergence of database systems and computer algebra systems."

Summary

Computer animation has traditionally been associated with applications in virtual-reality-based training, video games or feature films. However, interactive animation is gaining relevance in a much broader scope, as a tool for early-stage analysis, design and planning in many applications in science and engineering. The user can get quick visual feedback on the results, and then proceed by refining the experiments or designs. Potential applications include nanodesign, e-commerce or tactile telecommunication, but they reach as far as, e.g., the analysis of ecological, climate, biological or physiological processes.
The application of computer animation is extremely limited in comparison to its potential reach, due to a trade-off between accuracy and computational efficiency. This trade-off is induced by inherent sources of complexity such as nonlinear or anisotropic behaviors, heterogeneous properties, and high dynamic ranges of effects.
The Animetrics project proposes a modeling and animation methodology that consists of a multi-scale decomposition of complex processes, the description of the process at each scale through the combination of simple local models, and the fitting of the parameters of those local models using large amounts of data from example effects. The methodology will be explored on specific problems arising in complex mechanical phenomena, including viscoelasticity of solids and thin shells, multi-body contact, granular and liquid flow, and fracture of solids.
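The last step of the methodology, fitting the parameters of simple local models to example data, is at heart a regression problem. A minimal sketch: the "local model" here is a toy damped spring and the example data are synthetic, both invented for illustration; the project's actual models and data are of course far richer.

```python
# Fit the parameters of a simple local model (a damped spring) to example
# trajectory data via nonlinear least squares. Illustrative sketch only.
import numpy as np
from scipy.optimize import curve_fit

np.random.seed(0)

def damped_spring(t, k, c):
    """Displacement of a unit mass on a damped spring (stiffness k, damping c)."""
    omega = np.sqrt(np.maximum(k - c**2 / 4, 1e-9))   # guard: stay underdamped
    return np.exp(-c * t / 2) * np.cos(omega * t)

t = np.linspace(0, 10, 200)
observed = damped_spring(t, 4.0, 0.5) + np.random.normal(0, 0.02, t.size)

(k_fit, c_fit), _ = curve_fit(damped_spring, t, observed, p0=[1.0, 0.1])
print(k_fit, c_fit)   # recovered parameters, close to the true (4.0, 0.5)
```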

Summary"During the past twenty years, we have witnessed profound technological changes, summarised under the terms of digital revolution or entering the information age. It is evident that these technological changes will have a deep societal impact, and questions of privacy and security are primordial to ensure the survival of a free and open society.
Cryptology is a main building block of any security solution, and at the heart of projects such as electronic identity and health cards, access control, digital content distribution or electronic voting, to mention only a few important applications. During the past decades, public-key cryptology has established itself as a research topic in computer science; tools of theoretical computer science are employed to “prove” the security of cryptographic primitives such as encryption or digital signatures and of more complex protocols. It is often forgotten, however, that all practically relevant public-key cryptosystems are rooted in pure mathematics, in particular, number theory and arithmetic geometry. In fact, the socalled security “proofs” are all conditional to the algorithmic untractability of certain number theoretic problems, such as factorisation of large integers or discrete logarithms in algebraic curves. Unfortunately, there is a large cultural gap between computer scientists using a black-box security reduction to a supposedly hard problem in algorithmic number theory and number theorists, who are often interested in solving small and easy instances of the same problem. The theoretical grounds on which current algorithmic number theory operates are actually rather shaky, and cryptologists are generally unaware of this fact.
The central goal of ANTICS is to rebuild algorithmic number theory on the firm grounds of theoretical computer science."
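To make the hardness assumptions concrete, consider the discrete logarithm problem mentioned above: given g, p and h = g^x mod p, recover x. A generic attack such as baby-step giant-step needs on the order of sqrt(n) group operations in a group of size n, which is exactly why deployed parameters must be astronomically large. A minimal sketch with toy parameters chosen for illustration:

```python
# Baby-step giant-step: solves g^x = h (mod p) in O(sqrt(p)) time and space.
# Feasible here only because p is tiny; real systems use groups with on the
# order of 2**256 elements, putting sqrt(n) far out of reach.
from math import isqrt

def bsgs(g, h, p):
    """Return x with pow(g, x, p) == h, or None if no solution is found."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j -> j
    giant = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]           # x = i*m + j
        gamma = gamma * giant % p
    return None

p, g, x = 1000003, 2, 123456                     # toy prime and exponent
assert bsgs(g, pow(g, x, p), p) == x
```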

"During the past twenty years, we have witnessed profound technological changes, summarised under the terms of digital revolution or entering the information age. It is evident that these technological changes will have a deep societal impact, and questions of privacy and security are primordial to ensure the survival of a free and open society.
Cryptology is a main building block of any security solution, and at the heart of projects such as electronic identity and health cards, access control, digital content distribution or electronic voting, to mention only a few important applications. During the past decades, public-key cryptology has established itself as a research topic in computer science; tools of theoretical computer science are employed to “prove” the security of cryptographic primitives such as encryption or digital signatures and of more complex protocols. It is often forgotten, however, that all practically relevant public-key cryptosystems are rooted in pure mathematics, in particular, number theory and arithmetic geometry. In fact, the socalled security “proofs” are all conditional to the algorithmic untractability of certain number theoretic problems, such as factorisation of large integers or discrete logarithms in algebraic curves. Unfortunately, there is a large cultural gap between computer scientists using a black-box security reduction to a supposedly hard problem in algorithmic number theory and number theorists, who are often interested in solving small and easy instances of the same problem. The theoretical grounds on which current algorithmic number theory operates are actually rather shaky, and cryptologists are generally unaware of this fact.
The central goal of ANTICS is to rebuild algorithmic number theory on the firm grounds of theoretical computer science."

Max ERC Funding

1 453 507 €

Duration

Start date: 2012-01-01, End date: 2016-12-31

Project acronym

ASAP

Project

Adaptive Security and Privacy

Researcher (PI)

Bashar Nuseibeh

Host Institution (HI)

THE OPEN UNIVERSITY

Call Details

Advanced Grant (AdG), PE6, ERC-2011-ADG_20110209

Summary

With the prevalence of mobile computing devices and the increasing availability of pervasive services, ubiquitous computing (Ubicomp) is a reality for many people. This reality is generating opportunities for people to interact socially in new and richer ways, and to work more effectively in a variety of new environments. More generally, Ubicomp infrastructures – controlled by software – will determine users’ access to critical services.
With these opportunities come higher risks of misuse by malicious agents. Therefore, the role and design of software for managing use and protecting against misuse is critical, and engineering software that is functionally effective while safeguarding user assets from harm is a key challenge. Indeed, the very nature of Ubicomp means that software must adapt to the changing needs of users and their environment, and, more critically, to the different threats to users’ security and privacy.
ASAP proposes to radically re-conceptualise software engineering for Ubicomp in ways that are cognisant of the changing functional needs of users, of the changing threats to user assets, and of the changing relationships between them. We propose to deliver adaptive software capabilities for supporting users in managing their privacy requirements, and adaptive software capabilities to deliver the secure software that underpins those requirements. A key novelty of our approach is its holistic treatment of security and human behaviour. To achieve this, it draws upon contributions from requirements engineering, security & privacy engineering, and human-computer interaction. Our aim is to contribute to software engineering that empowers and protects Ubicomp users. Underpinning our approach will be the development of representations of security and privacy problem structures that capture user requirements, the context in which those requirements arise, and the adaptive software that aims to meet those requirements.

Summary

The property of spin has been harnessed in an array of revolutionary technologies, from nuclear spins in magnetic resonance imaging to spintronics in magnetic recording media. Nature at its deepest level is quantum mechanical, and spins are capable of demonstrating superposition and entanglement, yet such coherent properties have not yet been fully exploited. The exquisite control over materials fabrication and spin-control techniques has reached a maturity where spintronics can go beyond purely classical effects and begin to fully exploit these quantum properties. Potential applications range from quantum information processors, including the transmission of quantum information via itinerant electron spins and the storage of single microwave photons within spin ensembles, to a new generation of sensors exploiting entanglement to yield fundamentally enhanced precision.
The aim of ASCENT is to develop materials and devices in which electron and nuclear spins exhibit long-lived coherent quantum behaviour and interactions that can be harnessed for technological purposes. Specifically, ASCENT will exploit, in a range of condensed-matter systems from molecular materials to silicon-based structures, the possibility of transiently generating and removing electron spins in the vicinity of nuclear spins. The project represents a new and promising direction for the development of coherent interactions between spins in materials, and one which builds upon foundations I have established in my earlier work, often supported by preliminary investigations. Strong interactions with theory throughout this project will provide insights to refine and improve the experiments. In addition to direct applications in quantum technologies, the insights and methodology gained will be fed back into the wider field of spin resonance, including dynamic nuclear polarisation, structural biology and medical imaging.

Max ERC Funding

1 875 550 €

Duration

Start date: 2011-12-01, End date: 2017-06-30

Project acronym

BAYES OR BUST!

Project

Bayes or Bust: Sensible Hypothesis Tests for Social Scientists

Researcher (PI)

Eric-Jan Wagenmakers

Host Institution (HI)

UNIVERSITEIT VAN AMSTERDAM

Call Details

Starting Grant (StG), SH4, ERC-2011-StG_20101124

Summary

The goal of this proposal is to develop and promote Bayesian hypothesis tests for social scientists. By and large, social scientists have ignored the Bayesian revolution in statistics; consequently, most social scientists still assess the veracity of experimental effects using the same methodology that was used by their advisors and the advisors before them. This state of affairs is undesirable: social scientists conduct groundbreaking, innovative research only to analyze their results using methods that are old-fashioned or even inappropriate. This imbalance between the science and the statistics has gradually increased the pressure on the field to change the way inferences are drawn from their data. However, three requirements need to be fulfilled before social scientists are ready to adopt Bayesian tests of hypotheses. First, the Bayesian tests need to be developed for problems that social scientists work with on a regular basis; second, the Bayesian tests need to be default or objective; and, third, the Bayesian tests need to be available in a user-friendly computer program. This proposal seeks to make major progress on all three fronts.
Concretely, the projects in this proposal build on recent developments in the field of statistics and use the default Jeffreys-Zellner-Siow priors to compute Bayesian hypothesis tests for regression, correlation, the t-test, and different versions of analysis of variance (ANOVA). A similar approach will be used to develop Bayesian hypothesis tests for logistic regression and the analysis of contingency tables, as well as for popular latent process methods such as factor analysis and structural equation modeling. We aim to implement the various tests in a new computer program, Bayes-SPSS, with a similar look and feel as the frequentist statistical package SPSS (i.e., the Statistical Package for the Social Sciences). Together, these projects may help revolutionize the way social scientists analyze their data.
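For the t-test, the default JZS test reduces to a one-dimensional integral (Rouder, Speckman, Sun, Morey & Iverson, 2009): the alternative hypothesis places a Cauchy prior on effect size, expressed as an inverse-chi-square mixture over a scaling parameter g. A minimal sketch, illustrative only and not the proposed Bayes-SPSS implementation:

```python
# Default JZS Bayes factor for a one-sample t-test, following the integral
# form in Rouder et al. (2009). Illustrative sketch, not production code.
import numpy as np
from scipy import integrate

def jzs_bf10(t, n):
    """Bayes factor BF10 (evidence for an effect) from a t statistic."""
    v = n - 1                                  # degrees of freedom
    def integrand(g):                          # marginal likelihood under H1
        return ((1 + n * g) ** -0.5
                * (1 + t ** 2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    m1, _ = integrate.quad(integrand, 0, np.inf)
    m0 = (1 + t ** 2 / v) ** (-(v + 1) / 2)    # marginal likelihood under H0
    return m1 / m0

# A made-up result that is "significant" at p < .05 typically carries only
# modest Bayesian evidence:
print(jzs_bf10(t=2.24, n=80))
```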

Summary

Learning to read is probably one of the most exciting discoveries in our lives. Using a longitudinal approach, the proposed research examines how the human brain responds to two major challenges: (a) the instantiation of a complex cognitive function for which there is no genetic blueprint (learning to read in a first language, L1), and (b) the accommodation to new statistical regularities when learning to read in a second language (L2). The aim of the present research project is to identify the neural substrates of the reading process and its constituent cognitive components, with specific attention to individual differences and reading disabilities, and to investigate the relationship between specific cognitive functions and the changes in neural activity that take place in the course of learning to read in L1 and in L2. The project will employ a longitudinal design: we will recruit children before they learn to read in L1 and in L2 and track reading development with both cognitive and neuroimaging measures over 24 months. The findings from this project will provide a deeper understanding of (a) how general neurocognitive factors and language-specific factors underlie individual differences – and reading disabilities – in reading acquisition in L1 and in L2; (b) how the neurocognitive circuitry changes and brain mechanisms synchronize while instantiating reading in L1 and in L2; and (c) what the limitations and the extent of brain plasticity are in young readers. An interdisciplinary and multi-methodological approach is one of the keys to the success of the present project, along with strong theory-driven investigation. By combining both, we will generate breakthroughs that advance our understanding of how literacy in L1 and in L2 is acquired and mastered. The research proposed will also lay the foundations for more applied investigations of best practice in teaching reading in first and subsequent languages, and for devising intervention methods for reading disabilities.
