Socio-Informatics

Published in print: 2018-03-08

The last 25 years have seen a small revolution in our approach to the understanding of new technology and information systems. It has become a founding assumption of computer-supported cooperative work and human–computer interaction that in the future, if not already, most computer applications will be socially embedded, in the sense that they will become infrastructures (in some sense) for the development of the social practices which they are designed to support. Assuming that IT artifacts have to be understood in this sociotechnical way, traditional criteria for good design in computer science, such as performance, reliability, stability, or usability, arguably need to be supplemented by methods and perspectives which illuminate the way in which technology and social practice are mutually elaborating. This book concerns the philosophy, conceptual apparatus, and methodological concerns which will inform the development of a systematic and long-term human-centered approach to the IT-product life cycle, addressing issues of appropriation and infrastructuring. This entails an orientation to “practice-based computing.” The book contains chapters that examine the conceptual foundations of such an approach, together with empirical case studies that exemplify it.

Games User Research

Published in print: 2018-01-25

Today, Games User Research forms an integral component of the development of any kind of interactive entertainment. User research stands as the primary source of business intelligence in the incredibly competitive game industry. This book aims to provide the foundational, accessible, go-to resource for people interested in GUR. It is a community-driven effort, written by passionate professionals and researchers in the GUR community as a handbook and guide for everyone interested in user research and games. The book bridges the current gaps of knowledge in Games User Research, making it the go-to volume for everyone working with games, with an emphasis on those new to the field.

Computational Interaction

Published in print: 2018-01-18

This book presents computational interaction as an approach to explaining and enhancing the interaction between humans and information technology. Computational interaction applies abstraction, automation, and analysis to inform our understanding of the structure of interaction and also to inform the design of the software that drives new and exciting human-computer interfaces. The methods of computational interaction allow, for example, designers to identify user interfaces that are optimal against some objective criteria. They also allow software engineers to build interactive systems that adapt their behaviour to better suit individual capacities and preferences. Embedded in an iterative design process, computational interaction has the potential to complement human strengths and provide methods for generating inspiring and elegant designs. Computational interaction does not exclude the messy and complicated behaviour of humans; rather, it embraces it by, for example, using models that are sensitive to uncertainty and that capture subtle variations between individual users. It also promotes the idea that there are many aspects of interaction that can be augmented by algorithms. This book introduces computational interaction design to the reader by exploring a wide range of computational interaction techniques, strategies and methods. It explains how techniques such as optimisation, economic modelling, machine learning, control theory, formal methods, cognitive models and statistical language processing can be used to model interaction and design more expressive, efficient and versatile interaction.
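As a toy illustration of the kind of optimisation alluded to above — identifying an interface that is "optimal against some objective criteria" — the sketch below exhaustively searches menu orderings for one minimising an expected-cost objective. The commands, frequencies, and linear cost model are invented for illustration, not taken from the book:

```python
from itertools import permutations

# Hypothetical usage frequencies for four menu commands.
freq = {"open": 0.5, "save": 0.3, "export": 0.15, "print": 0.05}

# Objective: expected access cost, modelled (simplistically) as the
# item's position in the menu, weighted by how often it is used.
def expected_cost(order):
    return sum(freq[item] * pos for pos, item in enumerate(order, start=1))

# Exhaustive search over all orderings -- feasible only for tiny menus,
# but it makes "optimal against an objective criterion" concrete.
best = min(permutations(freq), key=expected_cost)
print(best)  # most frequently used commands float to the top
```

Real computational-interaction work replaces both the cost model (e.g. with a pointing-time model) and the brute-force search (e.g. with integer programming or simulated annealing), but the structure — objective function plus search over designs — is the same.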

Olivia Caramello

Published in print: 2017-12-21

This book is devoted to a general study of geometric theories from a topos-theoretic perspective. After recalling the necessary topos-theoretic preliminaries, it presents the main methodology it uses to extract ‘concrete’ information on theories from properties of their classifying toposes—the ‘bridge’ technique. As a first implementation of this methodology, a duality is established between the subtoposes of the classifying topos of a geometric theory and the geometric theory extensions (also called ‘quotients’) of the theory. Many concepts of elementary topos theory which apply to the lattice of subtoposes of a given topos are then transferred via this duality into the context of geometric theories. A second very general implementation of the ‘bridge’ technique is the investigation of the class of theories of presheaf type (i.e. classified by a presheaf topos). After establishing a number of preliminary results on flat functors in relation to classifying toposes, the book carries out a systematic investigation of this class resulting in a number of general results and a characterization theorem allowing one to test whether a given theory is of presheaf type by considering its models in arbitrary Grothendieck toposes. Expansions of geometric theories and faithful interpretations of theories of presheaf type are also investigated. As geometric theories can always be written (in many ways) as quotients of presheaf type theories, the study of quotients of a given theory of presheaf type is undertaken. Lastly, the book presents a number of applications in different fields of mathematics of the theory it develops.

Optimal Spacecraft Trajectories

John E. Prussing

Published in print: 2017-12-21

This book gives a modern, comprehensive treatment of the theory of optimal spacecraft trajectories and its important results. In most cases “optimal” means minimum propellant: less propellant required means more payload delivered to the destination. Both necessary and sufficient conditions for an optimal solution are analysed. Numerous illustrative examples are included, and problems are provided at the ends of the chapters along with references. Newer topics such as cooperative rendezvous and second-order conditions are considered. Seven appendices, some with problems, supplement the text. Both classical results and newer research results are covered, and a new test for a conjugate point is demonstrated. The book serves both as a graduate-level textbook and as a scholarly reference.
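One classical instance of a minimum-propellant solution is the two-impulse Hohmann transfer between coplanar circular orbits. The short worked example below computes its total delta-v; the orbit radii are illustrative values (roughly LEO to GEO), not taken from the book:

```python
import math

# Hohmann transfer between circular, coplanar orbits: the classical
# minimum-propellant two-impulse solution.
mu = 398600.4418          # Earth's gravitational parameter, km^3/s^2
r1, r2 = 6678.0, 42164.0  # initial (~300 km altitude) and final orbit radii, km

a_t = (r1 + r2) / 2.0     # semi-major axis of the transfer ellipse

v1 = math.sqrt(mu / r1)                       # circular speed at r1
v2 = math.sqrt(mu / r2)                       # circular speed at r2
v_p = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))  # transfer-orbit perigee speed
v_a = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))  # transfer-orbit apogee speed

# Total delta-v: one burn to enter the transfer ellipse, one to circularise.
dv = (v_p - v1) + (v2 - v_a)
print(f"total delta-v = {dv:.3f} km/s")  # roughly 3.9 km/s for these radii
```

Minimising delta-v is equivalent, via the rocket equation, to minimising propellant for a fixed spacecraft mass, which is why delta-v is the usual figure of merit.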

Benjamin Wardhaugh

Published in print: 2017-11-30

This book contains complete transcriptions, with notes, of the 133 surviving letters of Charles Hutton (1737–1823). The letters span the period 1770–1823 and are drawn from nearly thirty different archives. Most have not been published before. Hutton was one of the most prominent British mathematicians of his generation. He played roles at the Royal Society, the Royal Military Academy, the Board of Longitude, the ‘philomath’ network, and elsewhere. He worked on the explosive force of gunpowder and the mean density of the earth, winning the Royal Society’s Copley Medal in 1778; he was also at the centre of a celebrated row at the Royal Society in 1784 over the place of mathematics there. He is of particular historical interest because of the variety of roles he played in British mathematics, the dexterity with which he navigated, exploited, and shaped personal and professional networks in mathematics and science, and the length and public profile of his career. Hutton corresponded nationally and internationally, and his correspondence illustrates the overlap, intersection, and interaction of the different networks in which Hutton moved. It therefore provides new information about how Georgian mathematics was structured socially and how mathematical careers worked in that period. It provides a rare and valuable view of a mathematical culture that would substantially cease to exist when British mathematics embraced continental methods from the early nineteenth century onwards.

Heavenly Numbers: Astronomy and Authority in Early Imperial China

Christopher Cullen

Published in print: 2017-11-16

This book is a history of the development of mathematical astronomy in China, from the late third century BCE to the early third century CE—a period often referred to as ‘early imperial China’. It narrates the changes in ways of understanding the movements of the heavens and the heavenly bodies that took place during those four and a half centuries, and tells the stories of the institutions and individuals involved in those changes. It gives clear explanations of technical practice in observation, instrumentation and calculation, and the steady accumulation of data over many years—but it centres on the activity of the individual human beings who observed the heavens, recorded what they saw, and made calculations to analyse and eventually make predictions about the motions of the celestial bodies. It is these individuals, their observations, their calculations and the words they left to us that provide the narrative thread that runs through this work. Throughout the book, the author gives clear translations of original material that allow the reader direct access to what the people in this book said about themselves and what they tried to do. This book is designed to be accessible to a broad readership interested in the history of science, the history of China and the comparative history of ancient cultures, while still being useful to specialists in the history of astronomy.

Finite Elasticity Theory

David J. Steigmann

Published in print: 2017-08-17

This book is suitable for a first-year graduate course on Non-linear Elasticity Theory. It is aimed at graduate students, post-doctoral fellows and researchers working in Mechanics. Included is a modern treatment of elementary plasticity theory emphasizing the foundational role played by finite elasticity. The book covers fundamental and advanced material that should be mastered before embarking on research. Included are the concepts of frame invariance, material symmetry, kinematic constraints, a development of nonlinear membrane theory, energy minimizers as stable equilibria and various attendant convexity conditions.

Non-Standard Parametric Statistical Inference

Russell Cheng

Published in print: 2017-06-22

This book discusses the fitting of parametric statistical models to data samples. Emphasis is placed on (i) how to recognize situations where the problem is non-standard, when parameter estimates behave unusually, and (ii) the use of parametric bootstrap resampling methods in analysing such problems. Simple and practical model building is an underlying theme. A frequentist viewpoint based on likelihood is adopted, for which there is a well-established and very practical theory. The standard situation is where certain widely applicable regularity conditions hold. However, there are many apparently innocuous situations where standard theory breaks down, sometimes spectacularly. Most of the departures from regularity are described geometrically in the book, with mathematical detail only sufficient to clarify the non-standard nature of a problem and to allow formulation of practical solutions. The book is intended for anyone with a basic knowledge of statistical methods typically covered in a university statistical inference course who wishes to understand or study how standard methodology might fail. Simple, easy-to-understand statistical methods are presented which overcome these difficulties, illustrated by detailed examples drawn from real applications. Parametric bootstrap resampling is used throughout for analysing the properties of fitted models, illustrating its ease of implementation even in non-standard situations. Distributional properties are obtained numerically for estimators or statistics not previously considered in the literature, because their distributional properties are too hard to obtain theoretically. Bootstrap results are presented mainly graphically in the book, providing easy-to-understand demonstration of the sampling behaviour of estimators.
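The parametric bootstrap described above can be sketched in a few lines: fit a model by maximum likelihood, resample repeatedly from the *fitted* model, and refit each resample to see how the estimator varies. The exponential model, sample size, and number of replicates below are invented for illustration:

```python
import random
import statistics

random.seed(42)

# Simulated data, assumed to come from an exponential distribution.
data = [random.expovariate(2.0) for _ in range(200)]

# MLE of the rate parameter for an exponential model: 1 / sample mean.
def rate_mle(sample):
    return 1.0 / statistics.mean(sample)

theta_hat = rate_mle(data)

# Parametric bootstrap: draw new samples from the fitted model, refit.
boot = []
for _ in range(1000):
    resample = [random.expovariate(theta_hat) for _ in range(len(data))]
    boot.append(rate_mle(resample))

boot.sort()
# Approximate 95% percentile interval from the bootstrap distribution.
lo, hi = boot[25], boot[974]
print(f"MLE = {theta_hat:.3f}, 95% bootstrap interval = ({lo:.3f}, {hi:.3f})")
```

In regular problems this reproduces what asymptotic theory would give; the point of the book is that the same recipe still runs, and still gives usable answers, in non-standard problems where the asymptotic theory fails.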

Everyday Cryptography: Fundamental Principles and Applications

Keith Martin

Published in print: 2017-06-08

Cryptography is a vital technology that underpins the security of information in computer networks. This book presents a comprehensive introduction to the role that cryptography plays in providing information security for technologies such as the Internet, mobile phones, payment cards, and wireless local area networks. Focusing on the fundamental principles that ground modern cryptography as they arise in modern applications, it avoids both an over-reliance on transient technologies and overwhelming theoretical research. The first part of the book provides essential background, identifying the core security services provided by cryptography. The next part introduces the main cryptographic mechanisms that deliver these security services, such as encryption, hash functions, and digital signatures, discussing why they work and how to deploy them, without delving into any significant mathematical detail. In the third part, the important practical aspects of key management are introduced, which is essential for making cryptography work in real systems. The last part considers the application of cryptography. A range of application case studies is presented, alongside a discussion of the wider societal issues arising from use of cryptography to support contemporary cyber security.
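Two of the mechanisms mentioned — hash functions for integrity, and keyed MACs for data-origin authentication — can be illustrated with Python's standard library. The message and key below are placeholders, and a real deployment would also need proper key management, as the book's third part stresses:

```python
import hashlib
import hmac

message = b"transfer 100 to account 42"

# A cryptographic hash provides integrity: any change to the message
# yields a completely different digest.
digest = hashlib.sha256(message).hexdigest()
tampered = hashlib.sha256(b"transfer 900 to account 42").hexdigest()
print(digest != tampered)  # True: tampering is detectable

# A MAC (here HMAC-SHA256) additionally provides data-origin
# authentication: only holders of the shared key can compute a valid tag.
key = b"shared-secret-key"  # placeholder; real keys come from key management
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The verifier recomputes the tag and compares in constant time.
recomputed = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, recomputed))  # True: message authenticated
```

Note the division of labour: the hash alone cannot authenticate origin (anyone can hash), which is exactly why MACs and digital signatures exist as separate mechanisms.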

Teaching Statistics: A Bag of Tricks

Andrew Gelman and Deborah Nolan

Published in print: 2017-05-04

Students in the sciences, economics, social sciences, and medicine take an introductory statistics course. And yet statistics can be notoriously difficult for instructors to teach and for students to learn. To help overcome these challenges, Gelman and Nolan have put together this fascinating and thought-provoking book. Based on years of teaching experience, the book provides a wealth of demonstrations, activities, examples and projects that involve active student participation. Part I of the book presents a large selection of activities for introductory statistics courses and has chapters such as ‘First week of class’, with exercises to break the ice and get students talking; then descriptive statistics, graphics, linear regression, data collection (sampling and experimentation), probability, inference, and statistical communication. Part II gives tips on what works and what doesn’t, how to set up effective demonstrations, how to encourage students to participate in class and to work effectively in group projects. Course plans for introductory statistics, statistics for social scientists, and communication and graphics are provided. Part III presents material for more advanced courses on topics such as decision theory, Bayesian statistics, sampling, and data science.

Introduction to Symplectic Topology

Dusa McDuff and Dietmar Salamon

Published in print: 2017-03-23

Over the past number of years, powerful new methods in analysis and topology have led to the development of the modern global theory of symplectic topology, including several striking and important results. The first edition of Introduction to Symplectic Topology was published in 1995. The book was the first comprehensive introduction to the subject and became a key text in the area. In 1998, a significantly revised second edition contained new sections and updates. This third edition includes both further updates and new material on this fast-developing area. All chapters have been revised to improve the exposition, new material has been added in many places, and various proofs have been tightened up. Copious new references to key papers have been added to the bibliography. In particular, the material on contact geometry has been significantly expanded, many more details on linear complex structures and on the symplectic blowup and blowdown have been added, the section on J-holomorphic curves in Chapter 4 has been thoroughly revised, there are new sections on GIT and on the topology of symplectomorphism groups, and the section on Floer homology has been revised and updated. Chapter 13 has been completely rewritten and has a new title (Questions of Existence and Uniqueness). It now contains an introduction to existence and uniqueness problems in symplectic topology, a section describing various examples, an overview of Taubes–Seiberg–Witten theory and its applications to symplectic topology, and a section on symplectic 4-manifolds. Chapter 14 on open problems has been added.

Lectures on Geometry

Published in print: 2017-01-26

This volume contains a collection of papers based on lectures delivered by distinguished mathematicians at Clay Mathematics Institute events over the past few years. Although not explicitly linked, the topics in this volume have a common flavour and a common appeal to all who are interested in recent developments in geometry. They are intended to be accessible to all who work in this general area, regardless of their own particular research interests.

Direct Methods for Sparse Matrices

I. S. Duff, A. M. Erisman, and J. K. Reid

Published in print: 2017-01-26

Direct Methods for Sparse Matrices, second edition, is a complete rewrite of the first edition published 30 years ago. Much has changed since that time. Problems have grown greatly in size and complexity; nearly all our examples were of order less than 5,000 in the first edition, and are often more than a million in the second edition. Computer architectures are now much more complex, requiring new ways of adapting algorithms to parallel environments with memory hierarchies. Because the area is such an important one to all of computational science and engineering, a huge amount of research has been done since the first edition, some of it by the authors. This new research is integrated into the text with a clear explanation of the underlying mathematics and algorithms. New research that is described includes new techniques for scaling and error control, new orderings, new combinatorial techniques for partitioning both symmetric and unsymmetric problems, and a detailed description of the multifrontal approach to solving systems that was pioneered by the research of the authors and colleagues. This includes a discussion of techniques for exploiting parallel architectures and new work for indefinite and unsymmetric systems.
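The importance of orderings mentioned above can be illustrated with a toy symbolic-elimination count. The sketch below is ours, not code from the book: eliminating the hub of an arrowhead pattern first fills in every remaining pair, while eliminating it last creates no fill-in at all.

```python
# Hypothetical sketch (not from the book): symbolic Gaussian elimination
# on a symmetric sparsity pattern, counting the fill-in a pivot ordering
# creates.

def fill_in(order, edges, n):
    """Return the number of new nonzeros (fill-in) created when the
    symmetric pattern `edges` is eliminated in the given pivot order."""
    adj = {v: set() for v in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    eliminated, fill = set(), 0
    for v in order:
        nbrs = [u for u in adj[v] if u not in eliminated]
        # Eliminating v connects all of its remaining neighbours.
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                x, y = nbrs[a], nbrs[b]
                if y not in adj[x]:
                    adj[x].add(y)
                    adj[y].add(x)
                    fill += 1
        eliminated.add(v)
    return fill

# 5x5 arrowhead pattern: node 0 coupled to every other node.
arrow = [(0, 1), (0, 2), (0, 3), (0, 4)]
bad = fill_in([0, 1, 2, 3, 4], arrow, 5)   # hub first: dense fill-in
good = fill_in([1, 2, 3, 4, 0], arrow, 5)  # hub last: no fill-in
```

Minimum-degree and nested-dissection orderings generalize exactly this effect to large problems.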

Lectures on Inductive Logic

Jon Williamson

Published in print: 2017-01-05

Inductive logic (also known as confirmation theory) seeks to determine the extent to which the premisses of an argument entail its conclusion. This book offers an introduction to the field of inductive logic and develops a new Bayesian inductive logic. Chapter 1 introduces perhaps the simplest and most natural account of inductive logic, classical inductive logic, which is attributable to Ludwig Wittgenstein. Classical inductive logic is seen to fail in a crucial way, so there is a need to develop more sophisticated inductive logics. Chapter 2 presents enough logic and probability theory for the reader to begin to study inductive logic, while Chapter 3 introduces the ways in which logic and probability can be combined in an inductive logic. Chapter 4 analyses the most influential approach to inductive logic, due to W.E. Johnson and Rudolf Carnap. Again, this logic is seen to be inadequate. Chapter 5 shows how an alternative approach to inductive logic follows naturally from the philosophical theory of objective Bayesian epistemology. This approach preserves the inferences that classical inductive logic gets right (Chapter 6). On the other hand, it also offers a way out of the problems that beset classical inductive logic (Chapter 7). Chapter 8 defends the approach by tackling several key criticisms that are often levelled at inductive logic. Chapter 9 presents a formal justification of the version of objective Bayesianism which underpins the approach. Chapter 10 explains what has been achieved and poses some open questions.
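The classical (Wittgensteinian) inductive logic of Chapter 1 can be sketched computationally: give every truth-value assignment equal weight, and take the degree to which premisses entail a conclusion to be the proportion of premiss-satisfying assignments that also satisfy the conclusion. This toy illustration is ours, not an implementation from the book:

```python
from itertools import product

# Toy sketch (ours, not from the book): the classical, equal-weight
# measure over truth assignments and the induced degree of entailment.

def degree_of_entailment(premiss, conclusion, atoms):
    """Proportion of premiss-satisfying truth assignments that also
    satisfy the conclusion (undefined if the premiss is unsatisfiable)."""
    rows = [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))]
    prem = [r for r in rows if premiss(r)]
    both = [r for r in prem if conclusion(r)]
    return len(both) / len(prem)

# "a or b" partially entails "a": two of the three satisfying rows
# make a true, so the degree of entailment is 2/3.
d = degree_of_entailment(lambda v: v['a'] or v['b'],
                         lambda v: v['a'],
                         ['a', 'b'])
```

One well-known shortcoming is already visible in this measure: it does not allow learning from experience, which is part of what motivates the Johnson–Carnap and objective Bayesian alternatives treated in later chapters.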

Real Analysis

Fon-Che Liu

Published in print: 2016-10-27

Real analysis in its modern aspect is presented concisely in this text for the beginning graduate student of mathematics and related disciplines to have a solid grounding in the general theory of measure and to build helpful insights for effectively applying the general principles of real analysis to concrete problems. After an introductory chapter, a compact but precise treatment of general measure and integration is undertaken to provide the reader with an overall view of the general theory before delving into special measures. The universality of the method of outer measure in the construction of measures is emphasized, because it provides a unified way of looking for useful regularity properties of measures. The chapter on functions of real variables is the core of the book; it treats properties of functions that are not only basic for understanding the general features of functions but also relevant for the study of those function spaces which are important when application of functional analytical methods is in question. The chapter on basic principles of functional analysis and that on the Fourier integral reveal the intimate interplay between functional analysis and real analysis. Applications of many of the topics discussed are included; these contain explorations toward probability theory and partial differential equations.
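The method of outer measure emphasized above follows a standard pattern, stated here for orientation (the notation is ours, not necessarily the book's): from a premeasure μ on a covering family 𝒜 one defines

```latex
% Outer measure generated by a premeasure \mu on a covering family \mathcal{A}:
\mu^{*}(E) \;=\; \inf\Bigl\{\, \sum_{k=1}^{\infty} \mu(A_k)
  \;:\; E \subseteq \bigcup_{k=1}^{\infty} A_k,\ A_k \in \mathcal{A} \Bigr\}.

% Carath\'eodory's criterion then selects the measurable sets:
% E is measurable iff, for every test set T,
\mu^{*}(T) \;=\; \mu^{*}(T \cap E) \;+\; \mu^{*}(T \setminus E).
```

Restricting μ* to the Carathéodory-measurable sets yields a complete measure, which is the unified construction that makes the same machinery serve for Lebesgue and other special measures.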

Bolzano's Logical System

Ettore Casari

Published in print: 2016-09-15

A starting point of Bolzano’s logical reflection was the conviction that among truths there is a connection, according to which some truths are grounds of others, and these in turn are consequences of the former, and that such a connection is objective, i.e. subsisting independently of every cognitive activity of the subject. In the attempt to account for the distinction between subjective and objective levels of knowledge, Bolzano gradually gained the conviction that the reference of the subject to the object is mediated by a realm of entities without existence that, recalling the Stoic lectà, are here called ‘lectological’. Moreover, of the two main ways through which that reference takes place—psychic activity and linguistic activity—Bolzano favoured the first and traced back to it the problems of the second; i.e. he considered those intermediate entities first as possible content of psychic phenomena and only subordinately, on the basis of a complex theory of signs, as meanings of linguistic phenomena. This book follows this schema and treats, in great detail, first, lectological entities (ideas and propositions in themselves), second, cognitive psychic phenomena (subjective ideas and judgements), and, finally, linguistic phenomena. Moreover, it tries to bring to light the extraordinary systematic character of Bolzano’s logical thought, and it does this by showing that the main logical ideas developed principally in the first three parts of the Theory of Science, published in 1837, can be effortlessly formally presented within the well-known Hilbertian epsilon-calculus.

4-Manifolds

Selman Akbulut

Published in print: 2016-08-25

This book presents the topology of smooth 4-manifolds in an intuitive, self-contained way. Handlebody theory and the Seiberg–Witten theory of 4-manifolds are presented. Stein and symplectic structures on 4-manifolds are also discussed, and many recent applications are given.

Gödel's Disjunction: The Scope and Limits of Mathematical Knowledge

Published in print: 2016-08-11

The logician Kurt Gödel in 1951 established a disjunctive thesis about the scope and limits of mathematical knowledge: either the mathematical mind is equivalent to a Turing machine (i.e., a computer) or there are absolutely undecidable mathematical problems. In the second half of the twentieth century, attempts have been made to arrive at a stronger conclusion. In particular, arguments have been produced by the philosopher J.R. Lucas and by the physicist and mathematician Roger Penrose that intend to show that the mathematical mind is more powerful than any computer. These arguments, and counterarguments to them, have not convinced the logical and philosophical community. The reason for this is an insufficiency of rigour in the debate. The contributions in this volume move the debate forward by formulating rigorous frameworks and formally spelling out and evaluating arguments that bear on Gödel’s disjunction in these frameworks. The contributions in this volume have been written by world-leading experts in the field.

Proving in the Elementary Mathematics Classroom

Andreas J. Stylianides

Published in print: 2016-07-21

Proving in the Elementary Mathematics Classroom addresses a fundamental problem in children’s learning that has received relatively little research attention: Although proving and related concepts (e.g., proof, argumentation, conjecturing) are core to mathematics as a sense-making activity, they currently have a marginal place in elementary classrooms internationally. This book takes a step toward addressing this problem by examining how the place of proving in elementary students’ mathematical work can be elevated through the purposeful design and implementation of mathematics tasks, specifically proving tasks. In particular, the book draws on relevant research and theory and classroom episodes with 8–9-year-olds from England and the United States to examine different kinds of proving tasks and the proving activity they can help generate in the elementary classroom. It examines further the role of elementary teachers in mediating the relationship between proving tasks and proving activity, including major mathematical and pedagogical issues that can arise for them as they implement each kind of proving task in the classroom. In addition to its research contribution in the intersection of the scholarly areas of teaching/learning proving and task design/implementation, the book has important implications for teaching, curricular resources, and teacher education. For example, the book identifies different kinds of proving tasks whose balanced representation in the mathematics classroom and in curricular resources can support a rounded set of learning experiences for elementary students related to proving. It identifies further important mathematical ideas and pedagogical practices related to proving that can be studied in teacher education.

Curricular Resources and Classroom Use: The Case of Mathematics

Gabriel J. Stylianides

Published in print: 2016-05-01

Curricular Resources and Classroom Use examines the use of curricular resources, that is, the different kinds of materials (digital or physical) that teachers use in or for their teaching (textbooks, lesson plans, etc.). These resources have a significant influence on students’ opportunities to learn. At the same time, teachers play a crucial role as interpreters and users of curricular resources, so there is a complex relationship between curricular resources and their classroom use. Research thus far has mostly focused on developing approaches for studying either particular curricular resources or their classroom use. This book aims to bridge these highly related programs of research by describing, comparing, and exemplifying new research approaches for studying curricular resources and their classroom use, as well as the complex interplay between the two. This book exemplifies the approaches in the area of mathematics, but the approaches can be more broadly applicable and be used in isomorphic ways in other subject areas (science, history, etc.). As issues concerning curricular resources and the classroom use of such resources are of interest to researchers, curriculum developers (such as textbook authors), and teacher educators in many countries, this book is addressed to a broad international audience. In addition to providing implications for research, this book has implications for curriculum development and teacher education. Specifically, this book deepens understanding of how curriculum developers can better exploit the potential of curricular resources to support classroom work, and how teacher educators can better support teachers to use curricular resources in the classroom.

The New ABCs of Research: Achieving Breakthrough Collaborations

Ben Shneiderman

Published in print: 2016-02-01

The immense problems of the twenty-first century invite innovative thinking from students, academic researchers, business research managers, and government policymakers. Hopes for raising quality in healthcare delivery, securing community safety, expanding food production, improving environmental sustainability, and much more depend on pervasive application of research solutions. This book recognizes the unbounded nature of human creativity, the multiplicative power of teamwork, and the catalytic effects of innovation. Contemporary science, engineering, and design research teams get a further boost from fresh ways of using the Web, social media, and visual communications tools that amplify collaborations. The applied and basic research heroes who take on the immense problems of the present time face bigger-than-ever challenges, but if they adopt potent guiding principles and effective research life cycle strategies, they can produce the advances that will enhance the lives of many people. These inspirational research leaders will break free from traditional thinking, disciplinary boundaries, and narrow aspirations. They will be bold innovators and engaged collaborators who are ready to lead yet open to new ideas, and self-confident yet empathetic to others. This book reports on the growing number of initiatives to promote integrated approaches to research and the expansion of these efforts.

Here Be Dragons: Science, Technology and the Future of Humanity

Olle Häggström

Published in print: 2016-01-01

This book provides an abstract theory of Feynman’s operational calculus for functions of (typically) noncommuting operators. Although it is inspired by Feynman’s original heuristic suggestions and time-ordering (or disentangling) rules in his seminal 1951 paper, as is made clear in the introduction (Chapter 1) and elsewhere in the text, the theory developed in this book also goes well beyond them in a number of directions which were not anticipated in Feynman’s work. In particular, the work presented in this volume is oriented towards dealing with abstract and (typically) noncommuting linear operators acting on some Banach space, rather than operators arising from some variety of path integration. Some of the key structures developed in this volume enable us to obtain, in some sense, an appropriate abstract substitute for a generalized functional integral associated with the Feynman operational calculus attached to a given $n$-tuple of pairs $\{(A_j, \mu_j)\}_{j=1}^{n}$ of typically noncommuting bounded operators $A_j$ and probability measures $\mu_j$, for $j = 1, \dots, n$ and $n \ge 2$.

Gerald W. Johnson, Michel L. Lapidus, and Lance Nielsen

Published in print: 2015-08-01

This book, which is the first volume of two, presents a comprehensive treatment of aspects of classical and modern analysis relating to the theory of ‘partial differential equations’ and the associated ‘function spaces’. It begins with a quick review of basic properties of harmonic functions and Poisson integrals and then moves into a detailed study of Hardy spaces. The classical Dirichlet problem is considered, and a variety of methods for its resolution are presented, ranging from the potential-theoretic (Perron’s method of subharmonic functions and Wiener’s criterion, Green’s functions and Poisson integrals, the method of layer potentials or integral equations) to the variational (the Dirichlet principle). Parallel to this is the development of the necessary function spaces: Lorentz and Marcinkiewicz spaces, Sobolev spaces (of integer as well as fractional order), Hardy spaces, the John–Nirenberg space BMO, Morrey and Campanato spaces, Besov spaces and Triebel–Lizorkin spaces. Harmonic analysis is deeply intertwined with the topics covered, and the subjects of summability methods, Tauberian theorems, convolution algebras, the Calderón–Zygmund theory of singular integrals and Littlewood–Paley theory, which on the one hand connect to various PDE estimates (the Calderón–Zygmund inequality, Strichartz estimates, Mikhlin–Hörmander multipliers, etc.) and on the other lead to a unified characterisation of various function spaces, are discussed in great depth. The book ends with a discussion of regularity theory for second-order elliptic equations in divergence form, first with continuous and then with measurable coefficients, covering in particular De Giorgi’s theorem, Moser iteration, the Harnack inequality and local boundedness of solutions. (The case of elliptic systems and related topics is discussed in the exercises.)

Ali Taheri

Published in print: 2015-07-01
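The Poisson-integral method mentioned in the abstract can be illustrated by its best-known instance (standard textbook material, not a formula taken from this book): for continuous boundary data $g$ on the unit circle, the solution of the Dirichlet problem on the unit disc is

```latex
u\bigl(re^{i\theta}\bigr)
  = \frac{1}{2\pi}\int_0^{2\pi} P_r(\theta - t)\, g\bigl(e^{it}\bigr)\,\mathrm{d}t,
\qquad
P_r(\theta) = \frac{1 - r^2}{1 - 2r\cos\theta + r^2},
\quad 0 \le r < 1,
```

where $P_r$ is the Poisson kernel; $u$ is harmonic in the disc and attains the boundary values $g$ continuously.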

This book presents a comprehensive treatment of aspects of classical and modern analysis relating to the theory of ‘partial differential equations’ and the associated ‘function spaces’. It begins with a quick review of basic properties of harmonic functions and Poisson integrals and then moves into a detailed study of Hardy spaces. The classical Dirichlet problem is considered, and a variety of methods for its resolution are presented, ranging from the potential-theoretic (Perron’s method of subharmonic functions and Wiener’s criterion, Green’s functions and Poisson integrals, the method of layer potentials or integral equations) to the variational (the Dirichlet principle). Parallel to this is the development of the necessary function spaces: Lorentz and Marcinkiewicz spaces, Sobolev spaces (of integer as well as fractional order), Hardy spaces, the John–Nirenberg space BMO, Morrey and Campanato spaces, Besov spaces and Triebel–Lizorkin spaces. Harmonic analysis is deeply intertwined with the topics covered, and the subjects of summability methods, Tauberian theorems, convolution algebras, the Calderón–Zygmund theory of singular integrals and Littlewood–Paley theory, which on the one hand connect to various PDE estimates (the Calderón–Zygmund inequality, Strichartz estimates, Mikhlin–Hörmander multipliers, etc.) and on the other lead to a unified characterisation of various function spaces, are discussed in great depth. The book ends with a discussion of regularity theory for second-order elliptic equations in divergence form, first with continuous and then with measurable coefficients, covering in particular De Giorgi’s theorem, Moser iteration, the Harnack inequality and local boundedness of solutions. (The case of elliptic systems and related topics is discussed in the exercises.)

Ali Taheri

Published in print: 2015-07-01

This book presents the subject of turbulence, aiming to bridge the gap between elementary, heuristic accounts of turbulence and the more rigorous treatments found in the research literature. Throughout, the book combines the maximum of physical insight with the minimum of mathematical detail. This second edition covers a decade of advancement in the field, streamlining the original content while updating the sections where the subject has moved on. The expanded content includes large-scale dynamics, stratified and rotating turbulence, the increased power of direct numerical simulation, two-dimensional turbulence, magnetohydrodynamics, and turbulence in the core of the Earth.

Turbulence : An Introduction for Scientists and Engineers

Peter Davidson

Published in print: 2015-06-01

This book is an account of the theory and mathematical approaches in polymer entropy, with particular emphasis on mathematical approaches to directed and undirected lattice models. Results on the scaling and critical behaviour of directed and undirected models of self-avoiding walks, paths, polygons, animals and networks are presented. The general theory of tricritical scaling is reviewed in the context of models of lattice clusters, and the existence of a thermodynamic limit in these models is discussed in general and for particular models. Mathematical approaches based on subadditive and convex functions, generating function methods and percolation theory are used to analyse models of adsorbing, collapsing and pulled walks and polygons in the hypercubic and in the hexagonal lattice. These methods show the existence of thermodynamic limits, pattern theorems, phase diagrams and critical points, and give results on topological properties such as knotting and writhing in models of lattice polygons. The use of generating function methods and scaling in directed models is comprehensively reviewed in relation to scaling and phase behaviour in models of directed paths and polygons, including Dyck paths and models of convex polygons. Monte Carlo methods for the self-avoiding walk are discussed, with particular emphasis on dynamic algorithms such as the pivot and BFACF algorithms, and on kinetic growth algorithms such as the Rosenbluth algorithm and its variants, including the PERM, GARM and GAS algorithms.

The Statistical Mechanics of Interacting Walks, Polygons, Animals and Vesicles

E.J. Janse van Rensburg

Published in print: 2015-05-01
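The Rosenbluth-style kinetic growth algorithms mentioned in the abstract admit a very short sketch. The following minimal Python implementation (our own illustration; function names and structure are not taken from the book) grows square-lattice self-avoiding walks step by step and uses the mean Rosenbluth weight over many samples as an unbiased estimator of the number c_n of n-step walks:

```python
import random

def rosenbluth_walk(n, rng):
    """Grow one n-step self-avoiding walk on the square lattice, returning
    its Rosenbluth weight (the product of the number of free moves available
    at each step), or 0.0 if the walk gets trapped before reaching length n."""
    pos = (0, 0)
    occupied = {pos}
    weight = 1.0
    for _ in range(n):
        x, y = pos
        free = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if p not in occupied]
        if not free:
            return 0.0           # trapped walk contributes weight zero
        weight *= len(free)      # number of growth choices at this step
        pos = rng.choice(free)   # pick one free neighbour uniformly
        occupied.add(pos)
    return weight

def estimate_saw_count(n, samples=20000, seed=1):
    """The mean Rosenbluth weight is an unbiased estimator of c_n,
    the number of n-step self-avoiding walks starting at the origin."""
    rng = random.Random(seed)
    return sum(rosenbluth_walk(n, rng) for _ in range(samples)) / samples
```

For n = 2 the estimator is exact, since every two-step walk has weight 4 × 3 = 12 and no trapping can occur; for larger n the estimate fluctuates around the true count (for example, c_4 = 100 on the square lattice).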

This book presents analytics within a framework of mathematical theory and concepts, building upon firm foundations in probability theory, graphs and networks, random matrices, linear algebra, optimization, forecasting, discrete dynamical systems, and more. Following on from the theoretical considerations, applications are given to data from commercially relevant interests: supermarket baskets; loyalty cards; mobile phone call records; smart meters; ‘omic’ data; sales promotions; social media; and microblogging. Each chapter tackles a topic in analytics: social networks and digital marketing; forecasting; clustering and segmentation; inverse problems; Markov models of behavioural changes; multiple hypothesis testing and decision-making; and so on. Chapters start with background mathematical theory explained with a strong narrative, then move on to practical considerations, and finally to exemplar applications.

Mathematical Underpinnings of Analytics : Theory and Applications

Peter Grindrod

Published in print: 2014-11-27

At the crossroads between statistics and machine learning, probabilistic graphical models provide a powerful formal framework for modelling complex data. Probabilistic graphical models are probabilistic models whose graphical components denote conditional independence structures between random variables. The probabilistic framework makes it possible to deal with data uncertainty, while the conditional independence assumptions help in processing high-dimensional and complex data. Bayesian networks and Markov random fields are two of the most popular classes of such models. With the rapid advancement of high-throughput technologies and the ever-decreasing costs of these next-generation technologies, a fast-growing volume of biological data of various types, the so-called omics, is in need of accurate and efficient methods for modelling, prior to further downstream analysis. Network reconstruction from gene expression data represents perhaps the most emblematic area of research where probabilistic graphical models have been successfully applied. However, these models have also renewed interest in genetics, in particular association genetics, causality discovery, prediction of outcomes, detection of copy number variations, and epigenetics. For all these reasons, it is foreseeable that such models will have a prominent role to play in advances in genome-wide analyses.

Published in print: 2014-09-18
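As a small illustration of the conditional independence structure described in the abstract, here is a toy Bayesian network over three binary variables in plain Python (the network and its probability tables are invented for illustration): the graph A → B, A → C encodes the factorization P(A, B, C) = P(A) P(B|A) P(C|A), so B and C are independent given A.

```python
from itertools import product

# Conditional probability tables for a toy network A -> B, A -> C.
# All numbers are made up purely for illustration.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_c_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    """The joint distribution as the graph factorizes it:
    P(A=a, B=b, C=c) = P(a) * P(b|a) * P(c|a)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_a[a][c]

def conditional(b, c, a):
    """P(B=b, C=c | A=a), computed from the joint by marginalisation,
    without assuming the factorized form."""
    num = joint(a, b, c)
    den = sum(joint(a, bb, cc) for bb, cc in product((0, 1), (0, 1)))
    return num / den
```

Marginalising the joint and conditioning on A recovers exactly the product P(B|A) P(C|A), which is precisely the independence statement the graph encodes.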

The subject of this book is the efficient solution of partial differential equations (PDEs) that arise when modelling incompressible fluid flow. The first part (Chapters 1 through 5) covers the Poisson equation and the Stokes equations. For each PDE, there is a chapter concerned with finite element discretization and a companion chapter concerned with efficient iterative solution of the algebraic equations obtained from discretization. Chapter 5 describes the basics of PDE-constrained optimization. The second part of the book (Chapters 6 to 11) is a more advanced introduction to the numerical analysis of incompressible flows. It starts with four chapters on the convection–diffusion equation and the steady Navier–Stokes equations, organized by equation with a chapter describing discretization coupled with a companion concerned with iterative solution algorithms. The book concludes with two chapters describing discretization and solution methods for models of unsteady flow and buoyancy-driven flow.

Howard Elman, David Silvester, and Andy Wathen

Published in print: 2014-06-01
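The discretize-then-solve pattern described in the abstract can be sketched on the simplest model problem. The code below (an illustrative sketch of ours, not taken from the book) assembles the piecewise-linear finite element system for −u″ = f on (0, 1) with homogeneous Dirichlet conditions and solves the resulting tridiagonal system directly with the Thomas algorithm; the book itself is concerned with iterative solvers for the much larger systems arising in 2D and 3D flow problems.

```python
def solve_poisson_1d(n, f=lambda x: 1.0):
    """Piecewise-linear FEM for -u'' = f on (0,1), u(0) = u(1) = 0,
    on a uniform mesh with n interior nodes.  Returns (nodes, values)."""
    h = 1.0 / (n + 1)
    x = [h * (i + 1) for i in range(n)]
    # Tridiagonal stiffness matrix for linear elements:
    # diagonal 2/h, sub- and super-diagonal -1/h.
    a = [-1.0 / h] * n      # sub-diagonal (a[0] unused)
    b = [2.0 / h] * n       # diagonal
    c = [-1.0 / h] * n      # super-diagonal (c[-1] unused)
    # Load vector: integral of f * phi_i, approximated by h * f(x_i)
    # (exact when f is constant, since the hat function integrates to h).
    d = [h * f(xi) for xi in x]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return x, u
```

For f ≡ 1 the nodal values reproduce the exact solution u(x) = x(1 − x)/2, a well-known nodal-exactness property of linear elements in one dimension.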

Starting with the construction of stochastic processes, the book introduces Brownian motion and martingales. After proving the Doob–Meyer decomposition, quadratic variation processes and local martingales are discussed. The book proceeds to construct stochastic integrals, prove the Itô formula, derive several important applications of the formula such as the martingale representation theorem and the Burkholder–Davis–Gundy inequality, and establish the Girsanov theorem on change of measures. Next, attention is focused on stochastic differential equations, which arise in modelling physical phenomena perturbed by random forces. Diffusion processes are solutions of stochastic differential equations and form the main theme of this book. After establishing the existence and uniqueness of strong solutions to stochastic differential equations, weak solutions and martingale problems posed by stochastic differential equations are studied in detail. The Stroock–Varadhan martingale problem is a powerful tool in solving stochastic differential equations and is discussed in a separate chapter. The connection between diffusion processes and partial differential equations is important and fruitful. Probabilistic representations of solutions of partial differential equations, and a derivation of the Kolmogorov forward and backward equations, are provided. Gaussian solutions of stochastic differential equations, and Markov processes with jumps, are presented in successive chapters. The book concludes with a careful treatment of the probabilistic behaviour of diffusions, including the existence and uniqueness of invariant measures, ergodic behaviour, and the large deviations principle in the presence of small noise.

Stochastic Analysis and Diffusion Processes

Gopinath Kallianpur and P. Sundar

Published in print: 2014-01-16
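The Itô formula mentioned in the abstract can be recalled in its simplest form (standard material, independent of this book's notation): for $f \in C^2$ and a standard Brownian motion $B$,

```latex
f(B_t) = f(B_0) + \int_0^t f'(B_s)\,\mathrm{d}B_s
       + \frac{1}{2}\int_0^t f''(B_s)\,\mathrm{d}s,
\qquad \text{e.g. } f(x) = x^2:\quad
B_t^2 = 2\int_0^t B_s\,\mathrm{d}B_s + t .
```

The choice $f(x) = x^2$ exhibits $B_t^2 - t$ as a martingale, the prototype of the martingale arguments developed in the book.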

This book is about simple first-order theories. The class of simple theories was introduced by S. Shelah in the early 1980s. Subsequently, several specific algebraic structures having simple theories were studied by leading researchers, notably by E. Hrushovski. In the mid-1990s the author established in his thesis the symmetry and transitivity of non-forking for simple theories and, with A. Pillay, type-amalgamation for Lascar strong types. Since then a great deal of research on simplicity theory, the study of simple theories and structures, has been produced. This book starts with the introduction of the fundamental notions of dividing and forking, and covers material up to the hyperdefinable group configuration theorem for simple theories.

Simplicity Theory

Byunghan Kim

Published in print: 2013-10-17

For eight centuries mathematics has been researched and studied at Oxford, and the subject and its teaching have undergone profound changes during that time. This is the story of the intellectual and social life of this community, and of its interactions with the wider world. This highly readable and beautifully illustrated book reveals the richness and influence of Oxford’s mathematical tradition and the fascinating characters that helped to shape it. The story begins with the founding of the University of Oxford and the establishing of the medieval curriculum, in which mathematics had an important role. The Black Death, the advent of printing, the Civil War, and the Newtonian revolution all had a great influence on the development of mathematics at Oxford. So too did many well-known figures: Roger Bacon, Henry Savile, Robert Hooke, Christopher Wren, Edmond Halley, Florence Nightingale, Charles Dodgson (Lewis Carroll), and G. H. Hardy, to name but a few. Later chapters bring us to the 20th century, with some entertaining reminiscences by Sir Michael Atiyah of the thirty years he spent as an Oxford mathematician. In this second edition the story is brought right up to the opening of the new Mathematical Institute in 2013 with a foreword from Marcus du Sautoy and recent developments from Peter M. Neumann.

Oxford Figures : Eight Centuries of the Mathematical Sciences

Published in print: 2013-09-19

The history of mathematics is a well-studied and vibrant area of research, with books and scholarly articles published on various aspects of the subject. Yet, the history of combinatorics seems to have been largely overlooked. This book goes some way to redress this and serves two main purposes: it constitutes the first book-length survey of the history of combinatorics, and it assembles, for the first time in a single source, research on the history of combinatorics that would otherwise be inaccessible to the general reader. Individual chapters have been contributed by sixteen experts. The book opens with an introduction to two thousand years of combinatorics. This is followed by seven chapters on early combinatorics, leading from Indian and Chinese writings on permutations to late-Renaissance publications on the arithmetical triangle. The next seven chapters trace the subsequent story, from Euler’s contributions to such wide-ranging topics as partitions, polyhedra, and latin squares to the 20th-century advances in combinatorial set theory, enumeration, and graph theory. The book concludes with some combinatorial reflections.

Combinatorics: Ancient and Modern

Published in print: 2013-06-27

The history of mathematics is a well-studied and vibrant area of research, with books and scholarly articles published on various aspects of the subject. Yet, the history of combinatorics seems to have been largely overlooked. This book goes some way to redress this and serves two main purposes: it constitutes the first book-length survey of the history of combinatorics, and it assembles, for the first time in a single source, research on the history of combinatorics that would otherwise be inaccessible to the general reader. Individual chapters have been contributed by sixteen experts. The book opens with an introduction to two thousand years of combinatorics. This is followed by seven chapters on early combinatorics, leading from Indian and Chinese writings on permutations to late-Renaissance publications on the arithmetical triangle. The next seven chapters trace the subsequent story, from Euler’s contributions to such wide-ranging topics as partitions, polyhedra, and Latin squares to the 20th-century advances in combinatorial set theory, enumeration, and graph theory. The book concludes with some combinatorial reflections.

On the Topology and Future Stability of the Universe

Hans Ringström

Published in print: 2013-05-23

The subject of the book is the topology and future stability of models of the universe. In standard cosmology, the universe is assumed to be spatially homogeneous and isotropic. However, it is of interest to know whether perturbations of the corresponding initial data lead to similar solutions or not. This is the question of stability. It is also of interest to know what limitations observational constraints impose on the global topology. These are the topics addressed in the book. The theory underlying the discussion is the general theory of relativity. Moreover, in the book, matter is modelled using kinetic theory. As background material, the general theory of the Cauchy problem for the Einstein–Vlasov equations is therefore developed.

Computer Aided Assessment of Mathematics

Chris Sangwin

Published in print: 2013-05-02

This book examines computer aided assessment (CAA) of mathematics in which computer algebra systems (CAS) are used to automatically establish the mathematical properties of expressions provided by students in response to questions. In order to automate such assessment, the relevant criteria must be encoded. This is not so simple. Even articulating precisely the desired criteria forces the teacher to think very carefully indeed. Hence, CAA acts as a vehicle to examine assessment and mathematics education in detail and from a fresh perspective. For example, the constraints of the paper-based formats have affected what we do and why. It is natural for busy teachers to set only those questions which can be marked by hand in a straightforward way. However, there are other kinds of questions, e.g., those with non-unique correct answers, or where assessing the properties requires the marker themselves to undertake a significant computation. It is simply not sensible for a person to set these to large groups of students when marking by hand. And yet such questions have their place and value in provoking thought and learning. Furthermore, we explain how, in certain cases, these can be automatically assessed. Case studies of existing systems will illustrate this in a concrete and practical way.
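The kind of automatic checking described above can be sketched with a computer algebra system. A minimal illustration, assuming SymPy is available; the encoded criterion here, algebraic equivalence to a teacher's answer, is only one of several properties a real CAA system might test:

```python
# A minimal sketch of CAS-based answer checking, assuming SymPy.
# Encodes a single criterion: algebraic equivalence to a model answer.
import sympy as sp

def equivalent(student, teacher):
    """Return True if the two expressions are algebraically equivalent."""
    diff = sp.sympify(student) - sp.sympify(teacher)
    return sp.simplify(diff) == 0

# A question with non-unique correct answers: any valid form of the
# product is accepted, without the teacher listing every form by hand.
print(equivalent("(x + 1)*(x + 2)", "x**2 + 3*x + 2"))  # True
print(equivalent("x**2 + 3*x + 1", "x**2 + 3*x + 2"))   # False
```

Note that equivalence is deliberately separate from form: a system could add further encoded criteria (e.g. "is factored", "is expanded") on top of this one.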

A Posteriori Error Estimation Techniques for Finite Element Methods

Rüdiger Verfürth

Published in print: 2013-04-25

Self-adaptive discretization methods nowadays are an indispensable tool for the numerical solution of partial differential equations that arise from physical and technical applications. The aim is to obtain a numerical solution within a prescribed tolerance using a minimal amount of work. The main tools in achieving this goal are a posteriori error estimates which give global and local information on the error of the numerical solution and which can easily be computed from the given numerical solution and the data of the differential equation. In this monograph we review the most frequently used a posteriori error estimation techniques and apply them to a broad class of linear and nonlinear elliptic and parabolic equations. Although there are various approaches to adaptivity and a posteriori error estimation, they are all based on a few common principles. Our main goal is to elaborate these basic principles and to give guidelines for developing adaptive schemes for new problems. Chapters 1 and 2 are quite elementary and present various error indicators and their use for mesh adaptation in the framework of a simple model problem. The intention here is to present the basic principles using a minimal amount of notation and techniques. Chapters 4–6, on the other hand, are more advanced and present a posteriori error estimates within a general framework using the technical tools collected in Chapter 3. Most sections close with a bibliographical remark which indicates the historical development and hints at further results.
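A typical example of such an a posteriori estimate, for the model problem $-\Delta u = f$ discretised with piecewise linear finite elements, is the standard residual-based estimator (here $h_K$ and $h_E$ denote element and edge diameters, and $[\![\partial_n u_h]\!]$ the jump of the normal derivative across interior edges):

$$
\eta_K = \Big( h_K^2 \, \|f\|_{L^2(K)}^2 \;+\; \tfrac12 \sum_{E \subset \partial K} h_E \, \big\| [\![ \partial_n u_h ]\!] \big\|_{L^2(E)}^2 \Big)^{1/2},
\qquad
\|\nabla (u - u_h)\|_{L^2} \;\le\; c \, \Big( \sum_K \eta_K^2 \Big)^{1/2},
$$

with a constant $c$ independent of the mesh. The $\eta_K$ are computable from the numerical solution and the data alone, and their local values drive mesh refinement.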

Published in print: 2013-02-21

Quantum mechanics and linguistics appear to be quite unrelated at first sight. Yet significant parts of both concern compositional reasoning about the way information flows among subsystems and the manner in which this flow gives rise to the properties of a system as a whole. This book is about the mathematics underlying this notion of compositionality, how it gives rise to intuitive diagrammatic calculi, and how these compositional methods are applied to reason about phenomena of both disciplines. Over the past decade, theoretical physics and quantum information theory have turned to category theory to model and reason about quantum protocols. This new use of categorical and algebraic tools allows a more conceptual and insightful expression of elementary events, such as measurements, teleportation, and entanglement operations, that were obscured in previous formalisms. Recent work in natural language semantics has begun to use these categorical methods to relate grammatical analysis and semantic representations in a unified framework for analyzing language meaning and learning meaning from a corpus. A growing body of literature on the use of categorical methods in quantum information theory and computational linguistics shows both the need and opportunity for new research on the relation between these categorical methods and the abstract notion of information flow. The aim of this book is to supply an overview of how categorical methods are used to model information flow in both physics and linguistics, to serve as an introduction to this interdisciplinary research, and to provide a basis for future research and collaboration between the different communities interested in applying category-theoretic methods to their domains’ open problems.

Concentration Inequalities: A Nonasymptotic Theory of Independence

Stéphane Boucheron, Gábor Lugosi, Pascal Massart

Published in print: 2013-02-07

This monograph presents a mathematical theory of concentration inequalities for functions of independent random variables. The basic phenomenon under investigation is that if a function of many independent random variables does not depend too much on any of them then it is concentrated around its expected value. This book offers a host of inequalities to quantify this statement. The authors describe the interplay between the probabilistic structure (independence) and a variety of tools ranging from functional inequalities, transportation arguments, to information theory. Applications to the study of empirical processes, random projections, random matrix theory, and threshold phenomena are presented. The book offers a self-contained introduction to concentration inequalities, including a survey of concentration of sums of independent random variables, variance bounds, the entropy method, and the transportation method. Deep connections with isoperimetric problems are revealed. Special attention is paid to applications to the supremum of empirical processes.
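The concentration phenomenon described above is made precise by results such as the bounded differences inequality: if changing the $i$-th argument of $f$ alters its value by at most $c_i$, then for independent random variables $X_1, \dots, X_n$ and every $t > 0$,

$$
\Pr\big( f(X_1, \dots, X_n) - \mathbb{E}\, f(X_1, \dots, X_n) \ge t \big) \;\le\; \exp\!\Big( \frac{-2 t^2}{\sum_{i=1}^n c_i^2} \Big).
$$

The bound is nonasymptotic (it holds for every $n$) and quantifies "does not depend too much on any of them" through the $c_i$.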

Bayesian Theory and Applications

Published in print: 2013-01-24

The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This book travels on a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book honours the contributions of Sir Adrian F. M. Smith, one of the seminal Bayesian researchers, with his work on hierarchical models, sequential Monte Carlo, and Markov chain Monte Carlo and his mentoring of numerous graduate students.
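The MCMC techniques referred to above can be illustrated by their simplest member, random-walk Metropolis. A toy sketch for a standard normal target; the target, step size, and sample count are illustrative assumptions, not the hierarchical models treated in the book:

```python
# A minimal random-walk Metropolis sampler; an illustrative toy.
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Draw n_samples from the (unnormalised) log density via
    random-walk Metropolis with Gaussian proposals."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal density, up to its normalising constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(draws) / len(draws)
print(round(mean, 1))  # sample mean should sit near 0
```

Only ratios of the target appear in the acceptance step, which is what makes the method usable when the posterior's normalising constant is unknown.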

Krylov Subspace Methods: Principles and Analysis

Jörg Liesen, Zdenek Strakos

Published in print: 2012-10-18

This book offers a detailed treatment of the mathematical theory of Krylov subspace methods, with a focus on solving systems of linear algebraic equations. Starting from the idea of projections, Krylov subspace methods are characterised by their orthogonality and minimisation properties. Projections onto highly nonlinear Krylov subspaces can be linked with the underlying problem of moments, and therefore Krylov subspace methods can be viewed as matching-moments model reduction. This allows enlightening reformulations of questions from matrix computations into the language of orthogonal polynomials, Gauss–Christoffel quadrature, continued fractions, and, more generally, of the Vorobyev method of moments. Using the concept of cyclic invariant subspaces, conditions are studied that allow the generation of orthogonal Krylov subspace bases via short recurrences. The results motivate the practically important distinction between Hermitian and non-Hermitian problems. Finally, the book thoroughly addresses the computational cost of using Krylov subspace methods. The investigation includes the effects of finite precision arithmetic and focuses on the method of conjugate gradients (CG) and generalised minimal residuals (GMRES) as major examples. The book emphasises that algebraic computations must always be considered in the context of solving real-world problems, where the mathematical modelling, discretisation, and computation cannot be separated from each other. Moreover, the book underlines the importance of the historical context and demonstrates that knowledge of early developments can play an important role in understanding and resolving very recent computational problems. Many extensive historical notes are therefore included as an inherent part of the text. The book ends by formulating some omitted issues and challenges which need to be addressed in future work.
The book is intended as a research monograph which can be used in a wide range of graduate courses on related subjects. It can also be beneficial for readers interested in the history of mathematics.
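The method of conjugate gradients mentioned above can be sketched in a few lines. A dependency-free, exact-arithmetic illustration for a symmetric positive definite system; the matrix and right-hand side are chosen purely for the example, and none of the finite-precision analysis developed in the book appears here:

```python
# A textbook conjugate gradient sketch for symmetric positive definite A,
# written with plain lists to stay dependency-free.

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A x (x = 0 initially)
    p = r[:]                       # first search direction
    rr = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < tol:
            break
        # Short recurrence: the new direction needs only the previous one.
        p = [r[i] + (rr_new / rr) * p[i] for i in range(n)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
print([round(v, 4) for v in x])  # solves A x = b
```

The two-term update of `p` is exactly the short recurrence the Hermitian case permits; for non-Hermitian problems no such recurrence generating orthogonal bases exists in general, which is the distinction the book analyses.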

Cyclic Modules and the Structure of Rings

S. K. Jain, Ashish K. Srivastava, Askar A. Tuganbaev

Published in print: 2012-09-27

This book provides an up-to-date account of the literature on the subject of determining the structure of rings over which cyclic modules or proper cyclic modules have a finiteness condition or a homological property. The finiteness conditions and homological properties are closely interrelated in the sense that either hypothesis induces the other in some form. The main objective behind writing this volume is the absence of a book that contains most of the relevant material on the subject. Since before the last half century, numerous authors including Armendariz, Beidar, Camillo, Chatters, Clark, Cohen, Cozzens, Faith, Farkas, Fisher, Goodearl, Gómez Pardo, Guil Asensio, Hajarnavis, Huynh, Jain, Kohler, Levy, López-Permouth, Mohamed, Ornstein, Osofsky, Singh, Skornyakov, Smith, Tuganbaev, and Wisbauer have investigated rings whose factor rings or factor modules have a finiteness condition or a homological property. They made important contributions leading to new directions and questions that have been listed at the end of each chapter for the benefit of future researchers. The bibliography has more than 200 references and is not claimed to be exhaustive.

Hyperbolic Dynamics and Brownian Motion: An Introduction

Jacques Franchi, Yves Le Jan

Published in print: 2012-08-16

The idea of this book is to illustrate an interplay between distinct domains of mathematics. Firstly, this book provides an introduction to hyperbolic geometry, based on the Lorentz group PSO(1, d) and its Iwasawa decomposition, commutation relations and Haar measure, and on the hyperbolic Laplacian. The Lorentz group plays a role in relativistic space–time analogous to rotations in Euclidean space. Hyperbolic geometry is the geometry of the unit pseudo-sphere. The boundary of hyperbolic space is defined as the set of light rays. Special attention is given to the geodesic and horocyclic flows. This book presents hyperbolic geometry via special relativity to benefit from physical intuition. Secondly, this book introduces some basic notions of stochastic analysis: the Wiener process, Itô's stochastic integral and Itô calculus. The book studies linear stochastic differential equations on groups of matrices, and diffusion processes on homogeneous spaces. Spherical and hyperbolic Brownian motions, diffusions on stable leaves, and relativistic diffusion are constructed. Thirdly, quotients of hyperbolic space under a discrete group of isometries are introduced, and form the framework in which some elements of hyperbolic dynamics are presented, especially the ergodicity of the geodesic and horocyclic flows. An analysis is given of the chaotic behaviour of the geodesic flow, using stochastic analysis methods. The main result is Sinai's central limit theorem. Some related results (including a construction of the Wiener measure) which complete the expositions of hyperbolic geometry and stochastic calculus are given in the appendices.

Time Series Analysis by State Space Methods: Second Edition

James Durbin, Siem Jan Koopman

Published in print: 2012-05-03

This book presents a comprehensive treatment of the state space approach to time series analysis. The distinguishing feature of state space time series models is that observations are regarded as being made up of distinct components such as trend, seasonal, regression elements and disturbance elements, each of which is modelled separately. The techniques that emerge from this approach are very flexible. Part I presents a full treatment of the construction and analysis of linear Gaussian state space models. The methods are based on the Kalman filter and are appropriate for a wide range of problems in practical time series analysis. The analysis can be carried out from both classical and Bayesian perspectives. Part I then presents illustrations to real series and exercises are provided for a selection of chapters. Part II discusses approximate and exact approaches for handling broad classes of non-Gaussian and nonlinear state space models. Approximate methods include the extended Kalman filter and the more recently developed unscented Kalman filter. The book shows that exact treatments become feasible when simulation-based methods such as importance sampling and particle filtering are adopted. Bayesian treatments based on simulation methods are also explored.
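The Kalman filter recursions at the core of Part I can be illustrated on the local level model, the simplest linear Gaussian state space model: an unobserved level mu_t follows a random walk and is observed with noise. A minimal sketch; the data and the two variances are illustrative assumptions:

```python
# Kalman filter for the local level model
#   y_t = mu_t + eps_t,   mu_{t+1} = mu_t + xi_t,
# with eps_t ~ N(0, var_eps) and xi_t ~ N(0, var_xi).

def local_level_filter(y, var_eps=1.0, var_xi=0.1, a0=0.0, p0=1e6):
    """Run the Kalman filter; returns the filtered state means."""
    a, p = a0, p0                  # state mean and variance (diffuse start)
    filtered = []
    for obs in y:
        f = p + var_eps            # prediction-error variance
        k = p / f                  # Kalman gain
        a = a + k * (obs - a)      # filtered state mean
        p = p * (1 - k) + var_xi   # predicted state variance for next step
        filtered.append(a)
    return filtered

states = local_level_filter([4.4, 4.0, 3.5, 4.6, 4.1])
print([round(s, 2) for s in states])
```

With the diffuse initial variance `p0`, the first filtered state essentially equals the first observation; thereafter each update blends prediction and observation through the gain `k`, which is the component-wise modelling idea of the state space approach in its simplest form.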

Everyday Cryptography : Fundamental Principles and Applications

Keith M. Martin

Published in print: 2012-03-01

Cryptography is a vital technology that underpins the security of information in computer networks. This book presents an introduction to the role that cryptography plays in providing information security for technologies such as the Internet, mobile phones, payment cards, and wireless local area networks. Focusing on the fundamental principles that ground modern cryptography as they arise in modern applications, it avoids both an over-reliance on transient current technologies and overwhelming theoretical research. A short appendix is included for those looking for a deeper appreciation of some of the concepts involved. By the end of this book, the reader will not only be able to understand the practical issues concerned with the deployment of cryptographic mechanisms, including the management of cryptographic keys, but will also be able to interpret future developments in this increasingly important area of technology.
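
To make the symmetric-key idea concrete: the sketch below implements a one-time pad, the simplest symmetric scheme, in which encryption and decryption are the same XOR operation and security rests entirely on how the key is generated, shared, and kept secret. This is an illustration of the key-management point only, not a scheme the book recommends; deployed systems use standardised ciphers such as AES.

```python
import os

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a key byte; the same function decrypts.
    assert len(key) >= len(plaintext), "key must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"meet at noon"
key = os.urandom(len(message))            # key must be random and kept secret
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)  # symmetric: the same key decrypts
assert recovered == message
```

Note that the entire burden falls on key management: the key must be as long as the message, truly random, never reused, and securely distributed, which is exactly why practical systems need the key-management machinery the book discusses.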

An Introduction to Model-Based Survey Sampling with Applications

Ray Chambers and Robert Clark

Published in print: 2012-01-19

This book is an introduction to the model-based approach to survey sampling. It consists of three parts, with Part I focusing on estimation of population totals. Chapters 1 and 2 introduce survey sampling, and the model-based approach, respectively. Chapter 3 considers the simplest possible model, the homogeneous population model, which is then extended to stratified populations in Chapter 4. Chapter 5 discusses simple linear regression models for populations, and Chapter 6 considers clustered populations. The general linear population model is then used to integrate these results in Chapter 7. Part II of this book considers the properties of estimators based on incorrectly specified models. Chapter 8 develops robust sample designs that lead to unbiased predictors under model misspecification, and shows how flexible modelling methods like non-parametric regression can be used in survey sampling. Chapter 9 extends this development to misspecification robust prediction variance estimators and Chapter 10 completes Part II of the book with an exploration of outlier robust sample survey estimation. Chapters 11 to 17 constitute Part III of the book and show how model-based methods can be used in a variety of problem areas of modern survey sampling. They cover (in order) prediction of non-linear population quantities, sub-sampling approaches to prediction variance estimation, design and estimation for multipurpose surveys, prediction for domains, small area estimation, efficient prediction of population distribution functions and the use of transformations in survey inference. The book is designed to be accessible to undergraduate and graduate level students with a good grounding in statistics and applied survey statisticians seeking an introduction to model-based survey design and estimation.
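
As a toy illustration of model-based prediction of a population total (the numbers and function name are hypothetical, not from the book): under a model with a separate mean per stratum, the best linear unbiased predictor of the total reduces to the familiar stratified expansion estimator, each stratum's sample mean scaled up by its population size.

```python
def stratified_total(strata):
    # strata: list of (N_h, sample_values) pairs.
    # The predictor of the population total is sum_h N_h * sample_mean_h,
    # the expansion estimator, which is the best linear unbiased predictor
    # under a model with a separate mean for each stratum.
    total = 0.0
    for N_h, sample in strata:
        total += N_h * (sum(sample) / len(sample))
    return total

# Hypothetical population: two strata of sizes 100 and 50.
est = stratified_total([(100, [10.0, 12.0, 11.0]), (50, [20.0, 22.0])])
# 100 * 11.0 + 50 * 21.0 = 2150.0
```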

The Emperor's New Mathematics : Western Learning and Imperial Authority During the Kangxi Reign (1662-1722)

Catherine Jami

Published in print: 2011-12-01

This book explores how the mathematics the Jesuits brought to China was reconstructed as a branch of imperial learning so that the emperor Kangxi (r. 1662–1722) could consolidate his power over the most populous empire in the world. Kangxi forced a return to the use of what became known as ‘Western’ methods in official astronomy. In his middle life he studied astronomy, musical theory, and mathematics in person, with Jesuits as his teachers. In his last years he sponsored a book that was intended to compile these three disciplines, and he set several of his sons to work on this project. All this activity formed a vital part of his plan for establishing Manchu authority over the Chinese. This book sets out to explain how and why Kangxi made the sciences a tool for laying the foundations of empire, and to show how, as part of this process, mathematics was reconstructed as a branch of imperial learning.

Differential Geometry : Bundles, Connections, Metrics and Curvature

Clifford Henry Taubes

Published in print: 2011-10-13

Bundles, connections, metrics, and curvature are the ‘lingua franca’ of modern differential geometry and theoretical physics. Many of the tools used in differential topology are introduced and the basic results about differentiable manifolds, smooth maps, differential forms, vector fields, Lie groups, and Grassmannians are all presented here. Other material covered includes the basic theorems about geodesics and Jacobi fields, the classification theorem for flat connections, the definition of characteristic classes, and also an introduction to complex and Kähler geometry. The book uses many of the classical examples from, and applications of, the subjects it covers, in particular those where closed form expressions are available, to bring abstract ideas to life.

Bayesian Statistics 9

Published in print: 2011-10-06

The Valencia International Meetings on Bayesian Statistics – established in 1979 and held every four years – have been the forum for a definitive overview of current concerns and activities in Bayesian statistics. These are the edited Proceedings of the Ninth meeting, and contain the invited papers each followed by their discussion and a rejoinder by the author(s). In the tradition of the earlier editions, this encompasses an enormous range of theoretical and applied research, highlighting the breadth, vitality and impact of Bayesian thinking in interdisciplinary research across many fields as well as the corresponding growth and vitality of core theory and methodology. The Valencia 9 invited papers cover a broad range of topics, including foundational and core theoretical issues in statistics, the continued development of new and refined computational methods for complex Bayesian modelling, substantive applications of flexible Bayesian modelling, and new developments in the theory and methodology of graphical modelling. They also describe advances in methodology for specific applied fields, including financial econometrics and portfolio decision making, public policy applications for drug surveillance, studies in the physical and environmental sciences, astronomy and astrophysics, climate change studies, molecular biosciences, statistical genetics, and stochastic dynamic networks in systems biology.

Statistics and Scientific Method : An Introduction for Students and Researchers

Peter J. Diggle and Amanda G. Chetwynd

Published in print: 2011-08-11

An antidote to technique-oriented service courses, this book studiously avoids the recipe-book style and keeps algebraic details of specific statistical methods to the minimum extent necessary to understand the underlying concepts. Instead, it aims to give the reader a clear understanding of how core statistical ideas of experimental design, modelling, and data analysis are integral to the scientific method. Aimed primarily towards a range of scientific disciplines (albeit with a bias towards the biological, environmental, and health sciences), this book assumes some maturity of understanding of scientific method, but does not require any prior knowledge of statistics, or any mathematical knowledge beyond basic algebra and a willingness to come to terms with mathematical notation. Any statistical analysis of a realistically sized data-set requires the use of specially written computer software. An Appendix introduces the reader to our open-source software of choice. All of the material in the book can be understood without using either R or any other computer software.

Bayesian Smoothing and Regression for Longitudinal, Spatial and Event History Data

Ludwig Fahrmeir and Thomas Kneib

Published in print: 2011-04-28

Several recent advances in smoothing and semiparametric regression are presented in this book from a unifying, Bayesian perspective. Simulation-based full Bayesian Markov chain Monte Carlo (MCMC) inference, as well as empirical Bayes procedures closely related to penalized likelihood estimation and mixed models, are considered here. Throughout, the focus is on semiparametric regression and smoothing based on basis expansions of unknown functions and effects in combination with smoothness priors for the basis coefficients. Beginning with a review of basic methods for smoothing and mixed models, longitudinal data, spatial data, and event history data are treated in separate chapters. Worked examples from various fields such as forestry, development economics, medicine, and marketing are used to illustrate the statistical methods covered in this book. Most of these examples have been analysed using implementations in the Bayesian software BayesX, and some with R code.
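
A minimal sketch of the basis-plus-smoothness-prior idea, under the simplifying assumption of an identity basis (one coefficient per observation): penalised least squares with a second-difference penalty, which is the empirical-Bayes counterpart of a second-order random-walk prior on the fitted function. All names and the toy linear solver are illustrative, not the book's.

```python
def smooth(y, lam):
    # Minimise ||y - f||^2 + lam * sum_k (f[k] - 2 f[k+1] + f[k+2])^2
    # by solving (I + lam * D'D) f = y, with D the second-difference operator.
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 2):
        d = [0.0] * n
        d[k], d[k + 1], d[k + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            for j in range(n):
                A[i][j] += lam * d[i] * d[j]
    b = list(map(float, y))
    # Gaussian elimination with partial pivoting (fine for a toy-sized system).
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= m * A[col][j]
            b[r] -= m * b[col]
    f = [0.0] * n
    for r in range(n - 1, -1, -1):
        f[r] = (b[r] - sum(A[r][j] * f[j] for j in range(r + 1, n))) / A[r][r]
    return f
```

A straight line has zero second differences, so it passes through unchanged for any penalty weight, while rough data are pulled toward smoothness as lam grows.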

Riemann Surfaces

Simon Donaldson

Published in print: 2011-03-24

The theory of Riemann surfaces occupies a very special place in mathematics. It is a culmination of much of traditional calculus, making surprising connections with geometry and arithmetic. It is an extremely useful part of mathematics, knowledge of which is needed by specialists in many other fields. It provides a model for a large number of more recent developments in areas including manifold topology, global analysis, algebraic geometry, Riemannian geometry, and diverse topics in mathematical physics. This text on Riemann surface theory proves the fundamental analytical results on the existence of meromorphic functions and the Uniformisation Theorem. The approach taken emphasises PDE methods, applicable more generally in global analysis. The connection with geometric topology, and in particular the role of the mapping class group, is also explained. To this end, some more sophisticated topics have been included, compared with traditional texts at this level. While the treatment is novel, the roots of the subject in traditional calculus and complex analysis are kept well in mind. Part I sets up the interplay between complex analysis and topology, with the latter treated informally. Part II works as a rapid first course in Riemann surface theory, including elliptic curves. The core of the book is contained in Part III, where the fundamental analytical results are proved.

Causality in the Sciences

Published in print: 2011-03-17

There is a need for integrated thinking about causality, probability, and mechanism in scientific methodology. A panoply of disciplines, ranging from epidemiology and biology through to econometrics and physics, routinely make use of these concepts to infer causal relationships. But each of these disciplines has developed its own methods, where causality and probability often seem to have different understandings, and where the mechanisms involved often look very different. This variegated situation raises the question of whether progress in understanding the tools of causal inference in some sciences can lead to progress in other sciences, or whether the sciences are really using different concepts. Causality and probability are long-established central concepts in the sciences, with a corresponding philosophical literature examining their problems. The philosophical literature examining the concept of mechanism, on the other hand, is more recent and there has been no clear account of how mechanisms relate to causality and probability. If we are to understand causal inference in the sciences, we need to develop some account of the relationship between causality, probability, and mechanism. This book represents a joint project by philosophers and scientists to tackle this question, and related issues, as they arise in a wide variety of disciplines across the sciences.

Eric Renshaw

Published in print: 2011-02-24

The vast majority of random processes in the real world have no memory — the next step in their development depends purely on their current state. Stochastic realizations are therefore defined purely in terms of successive event-time pairs, and such systems are easy to simulate irrespective of their degree of complexity. However, whilst the associated probability equations are straightforward to write down, their solution usually requires the use of approximation and perturbation procedures. Traditional books, heavy in mathematical theory, often ignore such methods and attempt to force problems into a rigid framework of closed-form solutions.
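
The "successive event-time pairs" view translates directly into the standard stochastic simulation algorithm (Gillespie's method): draw an exponential waiting time at the total event rate, then pick which event fired. A sketch for a simple linear birth-death process (parameter values and names are arbitrary choices, not the book's):

```python
import random

def gillespie_birth_death(birth, death, x0, t_max, seed=1):
    # Exact simulation of a memoryless birth-death process: the realization
    # is exactly a sequence of (event-time, state) pairs.
    random.seed(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max:
        rate_up, rate_down = birth * x, death * x
        total = rate_up + rate_down
        if total == 0.0:                    # absorbed at zero: no more events
            break
        t += random.expovariate(total)      # time to the next event
        if t >= t_max:
            break
        x += 1 if random.random() < rate_up / total else -1
        path.append((t, x))
    return path

path = gillespie_birth_death(birth=0.1, death=0.3, x0=20, t_max=50.0)
```

Note that the simulation needs nothing beyond the current state, which is exactly the memoryless property the passage describes; the analytic probability equations for the same process are far harder to solve.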

Matroid Theory

James Oxley

Published in print: 2011-02-17

Seventy-five years of the study of matroids has seen the development of a rich theory with links to graphs, lattices, codes, transversals, and projective geometries. Matroids are of fundamental importance in combinatorial optimization and their applications extend into electrical and structural engineering. This book falls into two parts: the first provides a comprehensive introduction to the basics of matroid theory, while the second treats more advanced topics. It contains over 700 exercises, and includes proofs of all of the major theorems in the subject. The last two chapters review current research and list more than eighty unsolved problems along with a description of the progress towards their solutions.
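
The independence axioms that define a matroid can be checked by brute force on small ground sets. The sketch below (names mine) verifies them for the uniform matroid U(2,4), whose independent sets are the subsets of {1, 2, 3, 4} of size at most two:

```python
from itertools import combinations

def is_matroid(independent):
    # Check the independence axioms: the empty set is independent,
    # independence is hereditary, and the exchange (augmentation) axiom holds.
    ind = {frozenset(s) for s in independent}
    if frozenset() not in ind:
        return False
    for s in ind:                               # hereditary: subsets stay independent
        if any(s - {e} not in ind for e in s):
            return False
    for a in ind:                               # exchange: a smaller independent set
        for b in ind:                           # can be augmented from a larger one
            if len(a) < len(b) and not any(a | {e} in ind for e in b - a):
                return False
    return True

# The uniform matroid U(2,4): all subsets of size <= 2 are independent.
u24 = [set(c) for r in range(3) for c in combinations({1, 2, 3, 4}, r)]
assert is_matroid(u24)
```

Dropping {1, 3} and {2, 3} from the family while keeping {3} breaks the exchange axiom, so the same checker rejects it, which is a quick way to see that the axioms really do constrain the family.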

Traces and Determinants of Pseudodifferential Operators

Simon Scott

Published in print: 2010-09-09

This text provides a broad account of the theory of traces and determinants on geometric algebras of differential and pseudodifferential operators over compact manifolds. Trace and determinant functionals on geometric operator algebras provide a means of constructing refined invariants in analysis, topology, differential geometry, analytic number theory and QFT. The consequent interactions around such invariants have led to significant advances both in pure mathematics and theoretical physics. As the fundamental tools of trace theory have become well understood and clear general structures have emerged, so the need for specialist texts which explain the basic theoretical principles and the computational techniques has become increasingly exigent. This text is the first to deal with the general theory of traces and determinants of operators on manifolds in a broad context, encompassing a number of the principal applications and backed up by specific computations which set out the nuts-and-bolts of the basic theory in detail for newcomers. Both the microanalytic approach to traces and determinants via pseudodifferential operator theory and the more computational approach directed by applications in geometric analysis, are developed in a general framework that will be of interest to mathematicians and physicists in a number of different fields.
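
For orientation, one standard construction at the heart of this subject can be stated compactly (notation mine): for a positive elliptic operator A with eigenvalues λ_k, the spectral zeta function and the zeta-regularised determinant are

```latex
\zeta_A(s) = \sum_{k} \lambda_k^{-s}, \qquad
\det\nolimits_{\zeta} A = \exp\bigl(-\zeta_A'(0)\bigr),
```

where ζ_A(s), initially defined for Re(s) large, is continued meromorphically in s and is regular at s = 0, so that the formal identity log det A = −ζ_A′(0) acquires a rigorous meaning for operators with divergent eigenvalue products.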

The Many Facets of Geometry : A Tribute to Nigel Hitchin

Published in print: 2010-07-01

Few people have proved more influential in the field of differential and algebraic geometry, and in showing how this links with mathematical physics, than Nigel Hitchin. Oxford University's Savilian Professor of Geometry has made fundamental contributions in areas as diverse as: spin geometry, instanton and monopole equations, twistor theory, symplectic geometry of moduli spaces, integrable systems, Higgs bundles, Einstein metrics, hyperkähler geometry, Frobenius manifolds, Painlevé equations, special Lagrangian geometry and mirror symmetry, theory of gerbes, and many more. He was previously Rouse Ball Professor of Mathematics at Cambridge University, as well as Professor of Mathematics at the University of Warwick, is a Fellow of the Royal Society and has been the President of the London Mathematical Society. The chapters in this book, written by some of the greats in their fields (including four Fields Medalists), show how Hitchin's ideas have impacted on a wide variety of subjects. The book grew out of the Geometry Conference in Honour of Nigel Hitchin, held in Madrid.



In Defence of Objective Bayesianism

Jon Williamson

Published in print: 2010-04-29

Bayesian epistemology aims to answer the following question: How strongly should an agent believe the various propositions expressible in her language? Subjective Bayesians hold that it is largely (though not entirely) up to the agent as to which degrees of belief to adopt. Objective Bayesians, on the other hand, maintain that appropriate degrees of belief are largely (though not entirely) determined by the agent's evidence. This book states and defends a version of objective Bayesian epistemology. According to this version, objective Bayesianism is characterized by three norms: (i) Probability: degrees of belief should be probabilities; (ii) Calibration: they should be calibrated with evidence; and (iii) Equivocation: they should otherwise equivocate between basic outcomes. Objective Bayesianism has been challenged on a number of different fronts: for example, it has been accused of being poorly motivated, of failing to handle qualitative evidence, of yielding counter‐intuitive degrees of belief after updating, of suffering from a failure to learn from experience, of being computationally intractable, of being susceptible to paradox, of being language dependent, and of not being objective enough. The book argues that these criticisms can be met and that objective Bayesianism is a promising theory with an exciting agenda for further research.
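The three norms can be made concrete with a toy calculation (an illustrative sketch of my own, not an example from the book): take three basic outcomes and evidence constraining the probability of the first. Probability restricts attention to genuine probability distributions, Calibration keeps those consistent with the evidence, and Equivocation selects the most equivocal survivor, i.e. the distribution of maximum entropy.

```python
import math

# Illustrative sketch (outcomes and numbers are mine, not the book's):
# outcomes {a, b, c}; evidence says P(a) >= 0.5.
def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

best, best_h = None, -1.0
for i in range(101):                      # Probability: grid over the simplex
    for j in range(101 - i):
        p = (i / 100, j / 100, 1 - (i + j) / 100)
        if p[0] >= 0.5:                   # Calibration: respect the evidence
            h = entropy(p)
            if h > best_h:                # Equivocation: maximize entropy
                best, best_h = p, h

print(tuple(round(x, 2) for x in best))   # -> (0.5, 0.25, 0.25)
```

The evidence pins P(a) at its lower bound and the remaining mass is split evenly between the other outcomes, which is exactly the "otherwise equivocate" behaviour the third norm describes.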



Published in print: 2010-03-25

A universal goal of technological development is the enrichment of human life. The fast development of nano/micro technologies enables us to directly handle a single cell or a single molecule. This capability propels molecular-based diagnostic and therapeutic technologies to a new horizon. In this book, we not only examine state-of-the-art biotechnologies, but also present many cutting-edge research topics which will lead toward the next generation of technologies for improving human health. With the capabilities of moving, stopping, mixing, and concentrating minute amounts of fluid and/or particles, microfluidic circuitry provides unprecedented functions for advancing sample preparation and cell culture processes. By integrating biomarker sensors with microfluidics, it becomes possible to detect diseases in an extremely sensitive and specific manner. With cutting-edge optical techniques and proper surface molecular modification, we can study and manipulate biological processes in live cells. While we have made significant progress in studying and controlling phenomena at the nano/micro scale, human health is a system issue with a length scale on the order of a meter. A disparity of several orders of magnitude in the length scales presents significant challenges. Our current and future tasks are to develop seamless integration processes from materials through devices and eventually into engineering systems. Our ultimate goal is that these nano/micro-technology-based systems can effectively interface with and direct the biological complex system toward a desired fate.



New Perspectives in Stochastic Geometry

Published in print: 2009-11-26

Stochastic geometry is a subject with roots stretching back at least 300 years, but one which has only formed as an academic area in the last 50 years. It covers the study of random patterns, their probability theory, and the challenging problems raised by their statistical analysis. It has grown rapidly in response to challenges in all kinds of applied science, from image analysis through to materials science. Recently, still more stimulus has arisen from exciting new links with rapidly developing areas of mathematics, from fractals through percolation theory to randomized allocation schemes. Coupled with many ongoing developments arising from all sorts of applications, the area is changing and developing rapidly. This book is intended to lay foundations for future research directions by collecting together seventeen chapters contributed by leading researchers in the field, both theoreticians and people involved in applications, surveying these new developments in both theory and applications. It will introduce the fresh perspectives, new ideas, and interdisciplinary connections now arising from stochastic geometry and from the other areas of mathematics connecting to it.
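The simplest example of such a random pattern is the homogeneous Poisson point process. A minimal simulation sketch (mine, not from the book), assuming intensity λ on the unit square: the number of points is Poisson(λ)-distributed, and given that number the locations are independent and uniform.

```python
import math
import random

def poisson_point_process(lam, rng):
    """Sample a homogeneous Poisson point process of intensity lam
    on the unit square [0, 1]^2."""
    # Draw the number of points N ~ Poisson(lam) by inversion of the CDF.
    u, k = rng.random(), 0
    p = math.exp(-lam)
    cum = p
    while u > cum:
        k += 1
        p *= lam / k
        cum += p
    # Given N = k, the point locations are i.i.d. uniform on the square.
    return [(rng.random(), rng.random()) for _ in range(k)]

pts = poisson_point_process(100.0, random.Random(0))
print(len(pts))  # a Poisson(100) draw, so typically close to 100
```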



Multiscale Methods : Bridging the Scales in Science and Engineering

Published in print: 2009-10-01

Small-scale features and processes occurring at nanometer and femtosecond scales have a profound impact on what happens at larger space and time scales. In view of the increasing need to understand and control the behavior of products and processes at multiple scales, multiscale modeling and simulation has emerged as one of the focal research areas in applied science and engineering. The primary objective of this volume is to present the state of the art in multiscale mathematics, modeling, and simulation, and to address the following barriers: What information needs to be transferred from one model or scale to another, and what physical principles must be satisfied during the transfer of information? What are the optimal ways to achieve such transfer of information? How can the variability of physical parameters at multiple scales be quantified, and how can it be accounted for to ensure design robustness? The volume is intended as a reference book for scientists, engineers, and graduate students in traditional engineering and science disciplines as well as in the emerging fields of nanotechnology, biotechnology, microelectronics, and energy.



Thermoelasticity with Finite Wave Speeds

Józef Ignaczak and Martin Ostoja-Starzewski

Published in print: 2009-09-24

Generalized dynamic thermoelasticity is a vital area of research in continuum mechanics, free of the classical paradox of infinite propagation speeds of thermal signals in Fourier‐type heat conduction. Besides that paradox, the classical dynamic thermoelasticity theory offers an unsatisfactory description of a solid's response to fast transient loading (say, due to short laser pulses) or at low temperatures. Several models were developed and intensively studied over the past four decades, and this book is the first monograph on the subject since the 1970s, aiming to provide a point of reference in the field. It focuses on dynamic thermoelasticity governed by hyperbolic equations, and, in particular, on the two leading theories: that of Lord‐Shulman (with one relaxation time), and that of Green‐Lindsay (with two relaxation times). While the resulting field equations are linear partial differential equations, the complexity of the theories is due to the coupling of mechanical with thermal fields. The book is concerned with the mathematical aspects of both theories — existence and uniqueness theorems, domain of influence theorems, convolutional variational principles — as well as with the methods for various initial/boundary value problems. In the latter respect, following the establishment of the central equation of thermoelasticity with finite wave speeds, there are extensive presentations of: the exact, aperiodic‐in‐time solutions of Green‐Lindsay theory; Kirchhoff‐type formulas and integral equations in Green‐Lindsay theory; thermoelastic polynomials; moving discontinuity surfaces; and time‐periodic solutions. This is followed by a chapter on physical aspects of generalized thermoelasticity, with a review of several applications. The book closes with a chapter on a nonlinear hyperbolic theory of a rigid heat conductor for which a number of asymptotic solutions are obtained using a method of weakly nonlinear geometric optics.
The book is augmented by an extensive bibliography.
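For orientation, the origin of the finite wave speeds can be sketched in standard textbook form (quoted from the general literature on hyperbolic heat conduction, not from this book): replacing Fourier's law by the Maxwell–Cattaneo law with a relaxation time, as in the Lord–Shulman theory, turns the parabolic heat equation into a hyperbolic telegraph equation.

```latex
% Fourier's law (parabolic heat conduction, infinite signal speed):
%   \mathbf{q} = -k \nabla T
% Maxwell--Cattaneo law with relaxation time \tau_0 (Lord--Shulman):
\tau_0 \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -k \nabla T
% Combined with the energy balance \rho c \, \partial_t T = -\nabla\cdot\mathbf{q},
% this yields the hyperbolic telegraph equation
\tau_0 \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t}
  = \kappa \nabla^2 T, \qquad \kappa = \frac{k}{\rho c},
% so thermal disturbances propagate with the finite speed \sqrt{\kappa/\tau_0}.
```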



Applied Shape Optimization for Fluids

Bijan Mohammadi and Olivier Pironneau

Published in print: 2009-09-24

The fields of computational fluid dynamics (CFD) and optimal shape design (OSD) have received considerable attention in the recent past, and are of practical importance for many engineering applications. This book deals with shape optimization problems for fluids, with the equations needed for their understanding (Euler and Navier–Stokes, but also those for microfluids) and with the numerical simulation of these problems. It presents the state of the art in shape optimization for an extended range of applications involving fluid flows. Automatic differentiation, approximate gradients, unstructured mesh adaptation, multi-model configurations, and time-dependent problems are introduced, and their implementation in the industrial environments of the aerospace and automotive equipment industries is explained and illustrated. With the increases in the power of computers in industry since the first edition of this book, methods which were previously unfeasible have begun giving results, namely evolutionary algorithms, topological optimization methods, and level-set algorithms. In this edition, these methods are treated in separate chapters, but the book remains primarily one on differential shape optimization.
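Of the ingredients listed above, automatic differentiation is the easiest to illustrate in a few lines. The sketch below is a generic forward-mode illustration of my own, not code from the book: dual numbers carry a derivative alongside each value through every arithmetic operation, so exact derivatives fall out of an ordinary function evaluation without hand-coded formulas or finite-difference noise.

```python
class Dual:
    """Forward-mode automatic differentiation: each number carries
    its value and its derivative through every operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(float(other))

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Derivative of f at x, exact to floating-point accuracy."""
    return f(Dual(x, 1.0)).dot

# d/dx (3x^2 + 2x) at x = 4 is 6*4 + 2 = 26
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # -> 26.0
```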



Statistics and Informatics in Molecular Cancer Research

Published in print: 2009-06-18

This book discusses novel advances in informatics and statistics in molecular cancer research. Through eight chapters it discusses specific topics in cancer research, shows how those topics give rise to the development of new informatics and statistics tools, and explains how the tools can be applied. The focus of the book is to provide an understanding of key concepts and tools, rather than to dwell on technical issues. A main theme is the extensive use of array technologies in modern cancer research — gene expression and exon arrays, SNP and copy number arrays, and methylation arrays — to derive quantitative and qualitative statements about cancer, its progression and aetiology, and to understand how these technologies on the one hand allow us to learn about cancer tissue as a complex system and on the other hand allow us to pinpoint key genes and events as crucial for the development of the disease. Cancer is characterized by genetic and genomic alterations that influence all levels of the cell's machinery and function.



Computing with Cells : Advances in Membrane Computing

Pierluigi Frisco

Published in print: 2009-05-21

How could we use living cells to perform computation? Would our definition of computation change as a consequence? Could such a cell-computer outperform digital computers? These are some of the questions that the study of Membrane Computing tries to answer, and they underlie what is treated in this monograph. The descriptional and computational complexity of models in Membrane Computing are the two lines of research on which the focus lies here. In this context the book reports results for only some of the models present in this framework, but the models considered represent a very relevant part of all the models introduced so far in the study of Membrane Computing. They are among the most studied models in the field and they cover a broad range of features (using symbol objects or string objects, based only on communication, inspired by intra- and intercellular processes, having or not having a tree as underlying structure, etc.), which gives a grasp of the enormous flexibility of this framework. Links with biology and Petri nets run throughout the book. The book also aims to inspire research: it offers suggestions for research at various levels of difficulty and clearly indicates their importance and the relevance of the possible outcomes. Readers new to this field of research will find the provided examples particularly useful in understanding the treated topics.
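As a minimal, hedged sketch of the kind of model involved (a toy of my own, not one of the book's systems): a single membrane holds a multiset of symbol objects, and non-cooperative evolution rules are applied in a maximally parallel step, rewriting every object to which a rule applies.

```python
from collections import Counter

def step(multiset, rules):
    """One maximally parallel step of a toy one-membrane P system
    with non-cooperative rules (object -> list of objects).
    Every object with an applicable rule is rewritten; objects
    without a rule are copied unchanged."""
    out = Counter()
    for obj, count in multiset.items():
        for produced in rules.get(obj, [obj]):
            out[produced] += count
    return out

# The rule a -> aa doubles the population of 'a' each step.
rules = {"a": ["a", "a"]}
ms = Counter({"a": 1})
for _ in range(3):
    ms = step(ms, rules)
print(ms["a"])  # -> 8
```

Real P systems add membrane hierarchy, communication between regions, and priorities among rules; the point here is only the multiset-rewriting core.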



Consumer Credit Models : Pricing, Profit and Portfolios

Lyn C. Thomas

Published in print: 2009-01-29

Credit scoring — the quantitative and statistical techniques which assess the credit risks of lending to consumers — has been one of the most successful, if unsung, applications of mathematics in business over the last fifty years. Now, though, credit scoring is beginning to be used for decisions other than the traditional one of assessing the default risk of a potential borrower. Lenders are changing their objectives from minimizing defaults to maximizing profits; using the internet and the telephone as application channels means lenders can price or customize their loans for individual consumers. The introduction of the Basel Capital Accord banking regulations and the credit crunch following the problems with securitizing sub-prime mortgages mean one needs to be able to extend the default risk models from individual consumer loans to portfolios of such loans. Addressing these challenges requires new models that use credit scores as inputs, and these in turn require extensions of what is meant by a credit score. This book reviews the current methodology for building scorecards, clarifies what a credit score really is, and explains how scoring systems are measured. It then looks at the models that can be used to address a number of these new challenges: how to obtain profitability-based scoring systems; how to price new loans in a way that reflects their risk and also customise them to attract consumers; how the Basel Accord impacts the way credit scoring is used; and how credit scoring can be extended to assess the credit risk of portfolios of loans.
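What "a credit score really is" can be hinted at with a standard points-scaling construction (a common industry convention sketched from general knowledge, not an excerpt from this book): the score is an affine function of the log-odds of repayment, calibrated so that a reference score corresponds to reference odds and a fixed number of points — the "points to double the odds" (PDO) — corresponds to each doubling.

```python
import math

def log_odds_score(p_good, ref_score=600.0, ref_odds=30.0, pdo=20.0):
    """Map a predicted repayment probability to a points scale:
    ref_score points at odds of ref_odds good:bad, plus pdo extra
    points whenever the odds double.  The parameter values here are
    illustrative defaults, not figures from the book."""
    odds = p_good / (1.0 - p_good)
    factor = pdo / math.log(2.0)
    offset = ref_score - factor * math.log(ref_odds)
    return offset + factor * math.log(odds)

s1 = log_odds_score(30.0 / 31.0)  # odds 30:1
s2 = log_odds_score(60.0 / 61.0)  # odds 60:1, i.e. doubled
print(round(s1), round(s2))       # -> 600 620
```

On this scale a scorecard built by logistic regression is natural, since logistic regression predicts exactly the log-odds that the scaling consumes.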



Computability and Randomness

André Nies

Published in print: 2009-01-29

The complexity and the randomness aspect of a set of natural numbers are closely related. Traditionally, computability theory is concerned with the complexity aspect. However, computability-theoretic tools can also be used to introduce mathematical counterparts for the intuitive notion of randomness of a set. Recent research shows that, conversely, concepts and methods originating from randomness enrich computability theory. The book is about these two aspects of sets of natural numbers and about their interplay. For the first aspect, lowness and highness properties of sets are introduced. For the second aspect, first the randomness of finite objects is studied, and then the randomness of sets of natural numbers. A hierarchy of mathematical randomness notions is established. Each notion matches the intuitive idea of randomness to some extent. The advantages and drawbacks of notions weaker and stronger than Martin-Löf randomness are discussed. The main topic is the interplay of the computability and randomness aspects. Research on this interplay has advanced rapidly in recent years. One chapter focuses on injury-free solutions to Post's problem. A core chapter contains a comprehensible treatment of lowness properties below the halting problem, and how they relate to K-triviality. Each chapter exposes how the complexity properties are related to randomness. The book also contains analogs, in the area of higher computability theory, of results from the preceding chapters, reflecting very recent research.



General Relativity and the Einstein Equations

Yvonne Choquet-Bruhat

Published in print: 2008-12-04

General Relativity has passed all experimental and observational tests to model the motion of isolated bodies with strong gravitational fields, though the mathematical and numerical study of these motions is still in its infancy. It is believed that General Relativity models our cosmos, with a manifold of dimension possibly greater than four and debatable topology, opening a vast field of investigation for mathematicians and physicists alike. Remarkable conjectures have been proposed, many results have been obtained, but many fundamental questions remain open. This book overviews the basic ideas in General Relativity, introduces the necessary mathematics, and discusses some of the key open questions in the field.

Wavelet Methods for Elliptic Partial Differential Equations

Karsten Urban

Published in print: 2008-11-27

Wavelets have by now become a powerful tool in several applications. Their use for the numerical solution of operator equations has been investigated more recently. The theoretical understanding of such methods is now quite advanced and has produced deep results and additional insight. Moreover, the rigorous theoretical foundation of wavelet bases has also led to new insights into more classical numerical methods for partial differential equations (PDEs), such as finite elements. However, it is sometimes believed that understanding and applying the full power of wavelets requires a strong mathematical background in functional analysis and approximation theory. The main idea of this book is to introduce the main concepts and results of wavelet methods for solving linear elliptic partial differential equations in a framework that avoids technicalities to the maximum extent. The book also describes recent research, including adaptive methods for nonlinear problems, wavelets on general domains, and applications.

Catalan Numbers with Applications

Thomas Koshy

Published in print: 2008-11-09

Fibonacci and Lucas sequences are “two shining stars in the vast array of integer sequences,” and because of their ubiquity, their tendency to appear in quite unexpected and unrelated places, their abundant applications, and their intriguing properties, they have fascinated amateurs and mathematicians alike. Catalan numbers, however, are even more fascinating. Like the North Star in the evening sky, they are a beautiful and bright light in the mathematical heavens. They continue to provide fertile ground for number theorists, especially Catalan enthusiasts, and for computer scientists. Since the publication of Euler's triangulation problem (1751) and Catalan's parenthesization problem (1838), over 400 articles and problems on Catalan numbers have appeared in various periodicals. As Martin Gardner noted, even though many amateurs and mathematicians may know the ABCs of the Catalan sequence, they may not be familiar with its myriad unexpected occurrences, delightful applications, and properties, or the beautiful and surprising relationships among numerous examples. Like Fibonacci and Lucas numbers, Catalan numbers are an excellent source of fun and excitement. They can be used to generate interesting dividends for students, such as intellectual curiosity, experimentation, pattern recognition, conjecturing, and problem-solving techniques. The central character in the nth Catalan number is the central binomial coefficient, so Catalan numbers can be extracted from Pascal's triangle. In fact, there are a number of ways they can be read from Pascal's triangle, every one of which is described and exemplified. This brings Catalan numbers a step closer to number-theory enthusiasts in particular.
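The remark about the central binomial coefficient corresponds to the standard identity Cₙ = C(2n, n)/(n + 1). As a quick illustration (not taken from the book), the following Python sketch extracts the first few Catalan numbers from central binomial coefficients:

```python
from math import comb  # exact binomial coefficients, Python 3.8+

def catalan(n: int) -> int:
    """nth Catalan number via the central binomial coefficient C(2n, n)."""
    return comb(2 * n, n) // (n + 1)

# The first few Catalan numbers: 1, 1, 2, 5, 14, 42, ...
print([catalan(n) for n in range(6)])
```

These are the numbers counting, among many other things, Euler's triangulations of a convex polygon and Catalan's balanced parenthesizations.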

Tony Van Gestel and Bart Baesens

Published in print: 2008-10-23

This book is the first of a series of three that provides an overview of all the aspects, steps, and issues that should be considered when undertaking credit risk management, including the Basel II Capital Accord, with which all major banks must comply in 2008. The introduction of the recently proposed Basel II Capital Accord has raised many issues and concerns about how to appropriately manage credit risk. Managing credit risk is one of the next big challenges facing financial institutions. The importance and relevance of efficiently managing credit risk is evident from the huge investments that many financial institutions are making in this area, the booming credit industry in emerging economies (e.g., Brazil, China, India), the many events (courses, seminars, workshops) being organised on this topic, and the emergence of new academic journals and magazines in the field (e.g., Journal of Credit Risk, Journal of Risk Model Validation, Journal of Risk Management in Financial Institutions). Financial risk management, an area of increasing importance with the recent Basel II developments, is discussed in terms of practical business impact and increasing competition over profitability, laying the foundation for the other two books in the series.

Analysis and Stochastics of Growth Processes and Interface Models

Published in print: 2008-07-24

There has recently been a significant increase in activity at the interface between applied analysis and probability theory. With the potential of a combined approach to the study of various physical systems in view, this book is a collection of topical survey articles by leading researchers in both fields, working on the mathematical description of growth phenomena in the broadest sense. The main aim of the book is to foster interaction between researchers in probability and analysis, and to inspire joint efforts to attack important physical problems. Mathematical methods discussed in the book comprise large deviation theory, lace expansion, harmonic analysis, multi-scale techniques, and homogenization of partial differential equations. Models based on the physics of individual particles are discussed alongside models based on the continuum description of large collections of particles, and the mathematical theories are used to describe physical phenomena such as droplet formation, Bose–Einstein condensation, Anderson localization, Ostwald ripening, or the formation of the early universe.

Kelvin: Life, Labours and Legacy

Published in print: 2008-04-10

Lord Kelvin was one of the greatest physicists of the Victorian era. Widely known for the development of the Kelvin scale of temperature measurement, Kelvin's interests ranged across thermodynamics, the age of the Earth, the laying of the first transatlantic telegraph cable, not to mention inventions such as an improved maritime compass and a sounding device, which allowed depths to be taken both quickly and while the ship was moving. He was an academic engaged in fundamental research, while also working with industry and technological advances. He corresponded and collaborated with other eminent men of science such as Stokes, Joule, Maxwell, and Helmholtz; was raised to the peerage as a result of his contributions to science, and finally buried in Westminster Abbey next to Newton. This book contains a collection of chapters covering the life and wide-ranging scientific contributions made by William Thomson, Lord Kelvin (1824-1907).

The Factorization Method for Inverse Problems

Andreas Kirsch and Natalia Grinberg

Published in print: 2007-12-13

This book is devoted to problems of shape identification in the context of (inverse) scattering problems and problems of impedance tomography. In contrast to traditional methods, which are based on iterative schemes solving sequences of corresponding direct problems, this book presents a completely different method. The Factorization Method avoids the need to solve the (time-consuming) direct problems. Furthermore, no a priori information about the type of scatterer (penetrable or impenetrable), the type of boundary condition, or the number of components is needed. The Factorization Method can be considered an example of a Sampling Method. The book aims to construct a binary criterion on the known data to decide whether a given point z lies inside or outside the unknown domain D. By choosing a grid of sampling points z in a region known to contain D, the characteristic function of D can be computed (in the case of finite data, only approximately). The book also introduces some alternative Sampling Methods.
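The sampling workflow described above can be sketched schematically. In the Python illustration below (not from the book), the function `inside` is a hypothetical stand-in for the binary criterion that the Factorization Method derives from measured data; evaluating it on a grid of sampling points recovers an approximation to the characteristic function of D:

```python
# Hypothetical stand-in for the binary criterion that the Factorization
# Method would derive from measured data; here it simply tests
# membership in a disc of radius 0.5 centred at 0.2 + 0.1i.
def inside(z: complex) -> bool:
    return abs(z - (0.2 + 0.1j)) < 0.5

# Evaluate the criterion on an n-by-n grid of sampling points z covering
# the square [-1, 1] x [-1, 1], a region assumed to contain D; the
# resulting True/False grid approximates the characteristic function of D.
n = 41
grid = [[inside(complex(-1 + 2 * i / (n - 1), -1 + 2 * j / (n - 1)))
         for i in range(n)]
        for j in range(n)]
count = sum(map(sum, grid))
print(count, "of", n * n, "sampling points classified as inside D")
```

In the actual method the criterion is computed from the measured (e.g., far-field) data alone, which is precisely what makes the approach non-iterative.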

Sasakian Geometry

Charles Boyer and Krzysztof Galicki

Published in print: 2007-10-01

Sasakian manifolds were first introduced in 1962. This book's main focus is on the intricate relationship between Sasakian and Kähler geometries, especially when the Kähler structure is that of an algebraic variety. The book is divided into three parts. The first five chapters carefully prepare the stage for the proper introduction of the subject. After a brief discussion of G-structures, the reader is introduced to the theory of Riemannian foliations. A concise review of complex and Kähler geometry precedes a fairly detailed treatment of compact complex Kähler orbifolds. A discussion of the existence and obstruction theory of Kähler-Einstein metrics (Monge-Ampère problem) on complex compact orbifolds follows. The second part gives a careful discussion of contact structures in the Riemannian setting. Compact quasi-regular Sasakian manifolds emerge here as algebraic objects: they are orbifold circle bundles over compact projective algebraic orbifolds. After a discussion of symmetries of Sasakian manifolds in Chapter 8, the book looks at Sasakian structures on links of isolated hypersurface singularities in Chapter 9. What follows is a study of compact Sasakian manifolds in dimensions three and five focusing on the important notion of positivity. The latter is crucial in understanding the existence of Sasaki-Einstein and 3-Sasakian metrics, which are studied in Chapters 11 and 13. Chapter 12 gives a fairly brief description of quaternionic geometry which is a prerequisite for Chapter 13. The study of Sasaki-Einstein geometry was the original motivation for the book. The final chapter on Killing spinors discusses the properties of Sasaki-Einstein manifolds, which allow them to play an important role as certain models in the supersymmetric field theories of theoretical physics.

Flips for 3-folds and 4-folds

Published in print: 2007-06-01

The minimal model program in algebraic geometry is a conjectural sequence of algebraic surgery operations that simplifies any algebraic variety to a point where it can be decomposed into pieces with negative, zero, and positive curvature, in a similar vein as the geometrization program in topology decomposes a three-manifold into pieces with a standard geometry. The last few years have seen dramatic advances in the minimal model program for higher dimensional algebraic varieties, with the proof of the existence of minimal models under appropriate conditions, and the prospect within a few years of having a complete generalization of the minimal model program and the classification of varieties in all dimensions, comparable to the known results for surfaces and 3-folds. This edited collection of chapters, authored by leading experts, provides a complete and self-contained construction of 3-fold and 4-fold flips, and n-dimensional flips assuming minimal models in dimension n-1. A large part of the text is an elaboration of the work of Shokurov, and a complete and pedagogical proof of the existence of 3-fold flips is presented. The book contains a self-contained treatment of many topics that could only be found, with difficulty, in the specialized literature. The text includes a ten-page glossary.

Combinatorics, Complexity, and Chance : A Tribute to Dominic Welsh

Published in print: 2007-01-18

Professor Dominic Welsh has made significant contributions to the fields of combinatorics and discrete probability, including matroids, complexity, and percolation. He has taught, influenced, and inspired generations of students and researchers in mathematics. This book summarizes and reviews the consistent themes from his work through a series of articles written by renowned experts. These articles, presented as chapters, contain original research work, set in a broader context by the inclusion of review material.

Sylvie Benzoni-Gavage and Denis Serre

Published in print: 2006-11-23

This book presents a view of the state of the art in multi-dimensional hyperbolic partial differential equations, with a particular emphasis on problems in which modern tools of analysis have proved useful. Ordered in sections of gradually increasing degrees of difficulty, the text first covers linear Cauchy problems and linear initial boundary value problems, before moving on to nonlinear problems, including shock waves. The book finishes with a discussion of the application of hyperbolic PDEs to gas dynamics, culminating with the shock wave analysis for real fluids.

The Porous Medium Equation : Mathematical Theory

Juan Luis Vazquez

Published in print: 2006-10-26

The heat equation is one of the three classical linear partial differential equations of second order that form the basis of any elementary introduction to the area of PDEs, and only recently has it come to be fairly well understood. This book provides a presentation of the mathematical theory of the nonlinear heat equation usually called the Porous Medium Equation (PME). This equation appears in a number of physical applications, such as to describe processes involving fluid flow, heat transfer, or diffusion. Other applications have been proposed in mathematical biology, lubrication, boundary layer theory, and other fields. Each chapter contains a detailed introduction and is supplied with a section of notes, providing comments, historical notes or recommended reading, and exercises.
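For orientation, the PME in its standard form (a conventional statement, not quoted from this abstract) reads:

```latex
\partial_t u = \Delta\left(u^m\right), \qquad m > 1,
```

which reduces to the classical linear heat equation $\partial_t u = \Delta u$ when $m = 1$; for $m > 1$ the equation degenerates where $u = 0$, which is the source of characteristic nonlinear phenomena such as free boundaries and finite speed of propagation.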

Mathematical Methods for the Magnetohydrodynamics of Liquid Metals

Jean-Frédéric Gerbeau, Claude Le Bris, and Tony Lelièvre

Published in print: 2006-08-31

This text focuses on mathematical and numerical techniques for the simulation of magnetohydrodynamic phenomena, with an emphasis on the magnetohydrodynamics of liquid metals, on two-fluid flows, and on a prototypical industrial application. The approach is a highly mathematical one, based on the rigorous analysis of the equations at hand, and a solid numerical analysis of the discretization methods. Up-to-date techniques, both on the theoretical side and the numerical side, are introduced to deal with the nonlinearities of the multifluid magnetohydrodynamics equations. At each stage of the exposition, examples of numerical simulations are provided, first on academic test cases to illustrate the approach, next on benchmarks well documented in the professional literature, and finally on real industrial cases. The simulation of aluminium electrolysis cells is used as a guideline throughout the book to motivate the study of a particular setting of the magnetohydrodynamics equations.

Juan Luis Vázquez

Published in print: 2006-08-03

This book is concerned with the quantitative aspects of the theory of nonlinear diffusion equations; equations which can be seen as nonlinear variations of the classical heat equation. They appear as mathematical models in different branches of physics, chemistry, biology, and engineering, and are also relevant in differential geometry and relativistic physics. Much of the modern theory of such equations is based on estimates and functional analysis. Concentrating on a class of equations with nonlinearities of power type that lead to degenerate or singular parabolicity (equations of porous medium type), the aim of this book is to obtain sharp a priori estimates and decay rates for general classes of solutions in terms of estimates of particular problems. These estimates are the building blocks in understanding the qualitative theory, and the decay rates pave the way to the fine study of asymptotics. Many technically relevant questions are presented and analyzed in detail. A systematic picture of the most relevant phenomena is obtained for the equations under study, including time decay, smoothing, extinction in finite time, and delayed regularity.
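For orientation, the model case of the "equations of porous medium type" mentioned above can be written, in standard notation (not taken from this text), as

```latex
\partial_t u \;=\; \Delta\!\left(u^m\right) \;=\; \nabla \cdot \left( m\, u^{m-1} \nabla u \right), \qquad m > 0,
```

which is degenerate parabolic for \(m > 1\) (the diffusivity \(m u^{m-1}\) vanishes where \(u = 0\), producing finite propagation speed and free boundaries), singular for \(0 < m < 1\) (fast diffusion), and reduces to the classical heat equation for \(m = 1\).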


The Structure of Models of Peano Arithmetic

Roman Kossak, James Schmerl

Published in print: 2006-06-29

This book gives an account of the present state of research on lattices of elementary substructures and automorphisms of nonstandard models of arithmetic. Major representation theorems are proved, and the important particular case of countable recursively saturated models is discussed in detail. All necessary technical tools are developed. The list includes: constructions of elementary simple extensions; a partial classification of arithmetic types, in particular Gaifman's theory of definable types; forcing in arithmetic; elements of the Kirby-Paris combinatorial theory of cuts; Lascar's generic automorphisms; and applications of Abramson and Harrington's generalization of Ramsey's theorem. There are also chapters discussing ω1-like models with interesting second order properties, and a chapter on order types of nonstandard models.


Hilbert Modular Forms and Iwasawa Theory

Haruzo Hida

Published in print: 2006-06-15

The 1995 work by Wiles and Taylor-Wiles opened up a whole new technique in algebraic number theory and, a decade on, the waves caused by this incredibly important work are still being felt. This book describes a generalization of their techniques to Hilbert modular forms (towards the proof of the celebrated ‘R=T’ theorem) and applications of the theorem that have been found. Applications include a proof of the torsion of the adjoint Selmer group (over a totally real field F and over the Iwasawa tower of F) and an explicit formula of the L-invariant of the arithmetic p-adic adjoint L-functions. This implies the torsion of the classical anticyclotomic Iwasawa module of a CM field over the Iwasawa algebra. When specialized to an elliptic Tate curve over F by the L-invariant formula, the invariant of the adjoint square of the curve has exactly the same expression as the one in the conjecture of Mazur-Tate-Teitelbaum (which is for the standard L-function of the elliptic curve and is now a theorem of Greenberg-Stevens).


Category Theory

Steve Awodey

Published in print: 2006-05-25

This book is a text and reference book on Category Theory, a branch of abstract algebra. The book contains clear definitions of the essential concepts, which are illuminated with numerous accessible examples. It provides full proofs of all the important propositions and theorems, and aims to make the basic ideas, theorems, and methods of Category Theory understandable. Although it assumes few mathematical pre-requisites, the standard of mathematical rigour is not compromised. The material covered includes the standard core of categories; functors; natural transformations; equivalence; limits and colimits; functor categories; representables; Yoneda's lemma; adjoints; and monads. An extra topic of cartesian closed categories and the lambda-calculus is also provided.


Fourier-Mukai Transforms in Algebraic Geometry

D. Huybrechts

Published in print: 2006-04-20

This book provides a systematic exposition of the theory of Fourier-Mukai transforms from an algebro-geometric point of view. A basic knowledge of algebraic geometry is assumed; the key object of study is the derived category of coherent sheaves on a smooth projective variety. The derived category is a subtle invariant of the isomorphism type of a variety, and its group of autoequivalences often shows a rich structure. As it turns out (and this feature is pursued throughout the book), the behaviour of the derived category is determined by the geometric properties of the canonical bundle of the variety. Notions from other areas, e.g. singular cohomology, Hodge theory, abelian varieties, and K3 surfaces, are included, and full proofs and exercises are provided. The final chapter summarizes recent research directions, such as connections to orbifolds and the representation theory of finite groups via the McKay correspondence, stability conditions on triangulated categories, and the notion of the derived category of sheaves twisted by a gerbe.


Invitation to Fixed-Parameter Algorithms

Rolf Niedermeier

Published in print: 2006-02-02

This book provides an introduction to the concept of fixed-parameter tractability. The corresponding design and analysis of efficient fixed-parameter algorithms for optimally solving combinatorially explosive (NP-hard) discrete problems is a rapidly developing field, with a growing list of applications in contexts such as network analysis and bioinformatics. The book emphasizes algorithmic techniques over computational complexity theory. It is divided into three parts: a broad introduction providing the general philosophy and motivation; coverage of the algorithmic methods developed over the years in fixed-parameter algorithmics, which forms the core of the book; and a discussion of the essentials of parameterized hardness theory, focusing on W[1]-hardness (which parallels NP-hardness), stating some relations to polynomial-time approximation algorithms, and finishing with selected case studies that show the wide applicability of the presented methodology.
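The notion of fixed-parameter tractability can be illustrated with the textbook bounded-search-tree algorithm for Vertex Cover, which decides in O(2^k · m) time whether a graph with m edges has a vertex cover of size at most k. This is a standard example, not code from the book:

```python
def vertex_cover(edges, k):
    """Decide whether the graph has a vertex cover of size <= k.

    Classic bounded search tree: any cover must contain an endpoint of
    the first uncovered edge, so branch on including either endpoint.
    The recursion depth is bounded by the parameter k, not the graph size.
    """
    if not edges:
        return True   # nothing left to cover
    if k == 0:
        return False  # edges remain but budget exhausted
    u, v = edges[0]
    return (vertex_cover([e for e in edges if u not in e], k - 1) or
            vertex_cover([e for e in edges if v not in e], k - 1))

print(vertex_cover([(1, 2), (2, 3), (1, 3)], 1))  # False: a triangle needs 2 vertices
print(vertex_cover([(1, 2), (2, 3), (1, 3)], 2))  # True
```

The running time is exponential only in the parameter k, which is the hallmark of a fixed-parameter algorithm.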


The Diophantine Frobenius Problem

Jorge L. Ramírez Alfonsín

Published in print: 2005-12-01

During the early part of the last century, F. G. Frobenius raised, in his lectures, the following problem (called the Diophantine Frobenius Problem, FP): given relatively prime positive integers a1, ..., an, find the largest natural number (called the Frobenius number and denoted by g(a1, ..., an)) that is not representable as a nonnegative integer combination of a1, ..., an. It has turned out that knowledge of g(a1, ..., an) is extremely useful in investigating many different problems. A number of methods, from several areas of mathematics, have been used in the hope of finding a formula giving the Frobenius number and algorithms to calculate it. The main intention of this book is to highlight such ‘methods, ideas, viewpoints, and applications’ for as wide an audience as possible. This book aims to provide a comprehensive exposition of what is known today on FP.
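As a concrete companion to the definition above (not taken from the book; the function name and details are illustrative), the Frobenius number can be computed by a shortest-path argument on residues modulo the smallest generator, a standard idea in this literature:

```python
import heapq

def frobenius(coins):
    """Frobenius number g(a1, ..., an) for relatively prime positive integers.

    For each residue r mod a (a = smallest coin), find the smallest
    representable number congruent to r via Dijkstra on the residue graph;
    the Frobenius number is then max over residues of that value, minus a.
    """
    coins = sorted(coins)
    a = coins[0]
    dist = [float('inf')] * a  # dist[r] = least representable value = r (mod a)
    dist[0] = 0
    pq = [(0, 0)]
    while pq:
        d, r = heapq.heappop(pq)
        if d > dist[r]:
            continue
        for c in coins[1:]:
            nd, nr = d + c, (r + c) % a
            if nd < dist[nr]:
                dist[nr] = nd
                heapq.heappush(pq, (nd, nr))
    return max(dist) - a

print(frobenius([3, 5]))      # 7, matching g(a, b) = ab - a - b for two generators
print(frobenius([6, 9, 20]))  # 43, the classic "McNugget" number
```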


From Sets and Types to Topology and Analysis : Towards practicable foundations for constructive mathematics

Published in print: 2005-10-06

Constructive mathematics is a vital area of research which has gained special attention in recent years due to the distinctive presence of computational content in its theorems. This characteristic had been already stressed by Bishop in his fundamental contribution to the subject, Foundations of Constructive Analysis (1967). Following Bishop's new approach to mathematics based on intuitionistic logic, various formal systems were introduced in the early 1970s with the intent to clarify the notion of set theory underlying his work. This book addresses the relationship between foundations and practice of constructive mathematics Bishop-style, by presenting on the one hand some very recent contributions to constructive analysis and formal topology, and on the other hand studies which underline the capabilities and expressiveness of various formal systems which have been introduced as foundations for constructive mathematics, like constructive set and type theories. The book aims to provide a point of reference by presenting up-to-date contributions by some of the most active scholars in each field. A variety of approaches and techniques are represented to give as wide a view as possible and promote cross-fertilization between different styles and traditions. The book also aims at further promoting awareness and discussion on the issue of bridging foundations and practice of constructive mathematics, thus filling the apparent distance that has emerged between them in recent years.


Celebrating Statistics : Papers in honour of Sir David Cox on his 80th birthday

Published in print: 2005-09-22

Sir David Cox is among the most important statisticians of the past half-century, making pioneering and highly influential contributions to a wide range of topics in statistics and applied probability. This book contains summaries of the invited talks at a meeting held at the University of Neuchâtel in July 2004 to celebrate David Cox’s 80th birthday. The chapters describe current developments across a wide range of topics, ranging from statistical theory and methods, through applied probability and modelling, to applications in areas including finance, epidemiology, hydrology, medicine, and social science. The book contains chapters by numerous well-known statisticians. It provides a summary of current thinking across a wide front by leading statistical thinkers.


Inverse Eigenvalue Problems : Theory, Algorithms, and Applications

Moody Chu, Gene Golub

Published in print: 2005-06-16

The basic goal of an inverse eigenvalue problem is to reconstruct the physical parameters of a certain system from the knowledge or desire of its dynamical behavior. Depending on the application, inverse eigenvalue problems appear in many different forms. This book discusses the fundamental questions, some known results, many applications, mathematical properties, a variety of numerical techniques, as well as several open problems.


Spectral/hp Element Methods for Computational Fluid Dynamics

George Karniadakis, Spencer Sherwin

Published in print: 2005-06-02

Spectral methods have long been popular in direct and large eddy simulation of turbulent flows, but their use in areas with complex-geometry computational domains has historically been much more limited. More recently, the need to find accurate solutions to the viscous flow equations around complex configurations has led to the development of high-order discretization procedures on unstructured meshes, which are also recognized as more efficient for solution of time-dependent oscillatory solutions over long time periods. This book, an updated edition of the original text, presents the recent and significant progress in multi-domain spectral methods at both the fundamental and application level. Containing material on discontinuous Galerkin methods, non-tensorial nodal spectral element methods in simplex domains, and stabilization and filtering techniques, this text introduces the use of spectral/hp element methods with particular emphasis on their application to unstructured meshes. It provides a detailed explanation of the key concepts underlying the methods along with practical examples of their derivation and application.


Set Theory : Boolean-Valued Models and Independence Proofs

John L. Bell

Published in print: 2005-05-12

This is the third edition of a well-known graduate textbook on Boolean-valued models of set theory. The aim of the first and second editions was to provide a systematic and adequately motivated exposition of the theory of Boolean-valued models as developed by Scott and Solovay in the 1960s, deriving along the way the central set theoretic independence proofs of Cohen and others in the particularly elegant form that the Boolean-valued approach enables them to assume. In this edition, the background material has been augmented to include an introduction to Heyting algebras. It includes chapters on Boolean-valued analysis and Heyting-algebra-valued models of intuitionistic set theory.


Interpolation and Definability : Modal and Intuitionistic Logics

Dov M. Gabbay, Larisa Maksimova

Published in print: 2005-05-12

This book focuses on interpolation and definability. These notions are not only central in pure logic, but have significant meaning and applicability in all areas where logic itself is applied, especially in computer science, artificial intelligence, logic programming, philosophy of science, and natural language. The book provides basic knowledge on interpolation and definability in logic, and contains a systematic account of material which has previously been scattered across many papers. A variety of methods and results are presented, beginning with the famous theorems of Beth and Craig in classical predicate logic (1953-57) and proceeding to the most valuable achievements in non-classical areas of logic, mainly intuitionistic and modal logic. Together with semantical and proof-theoretic methods, close interrelations between logic and universal algebra are established and exploited.


Published in print: 2005-04-14

The mathematical genius Alan Turing (1912-1954) was one of the greatest scientists and thinkers of the 20th century. Now well known for his crucial wartime role in breaking the ENIGMA code, he was the first to conceive of the fundamental principle of the modern computer — the idea of controlling a computing machine's operations by means of coded instructions, stored in the machine's ‘memory’. In 1945, Turing drew up his revolutionary design for an electronic computing machine — his Automatic Computing Engine (‘ACE’). A pilot model of the ACE ran its first programme in 1950 and the production version, the ‘DEUCE’, went on to become a cornerstone of the fledgling British computer industry. The first ‘personal’ computer was based on Turing's ACE. This book describes Turing's struggle to build the modern computer. It contains first-hand accounts by Turing and by the pioneers of computing who worked with him. The book describes the hardware and software of the ACE and contains chapters describing Turing's path-breaking research in the fields of Artificial Intelligence (AI) and Artificial Life (A-Life).


Numerical Methods for Structured Markov Chains

Dario A. Bini, Guy Latouche, Beatrice Meini

Published in print: 2005-02-03

The book deals with the numerical solution of structured Markov chains, which include M/G/1 and G/M/1-type Markov chains, QBD processes, non-skip-free queues, and tree-like stochastic processes, and which have wide applicability in queueing theory and stochastic modeling. It presents in a unified language the most up-to-date algorithms, so far scattered across diverse papers written with different languages and notation. It contains a thorough treatment of numerical algorithms to solve these problems, from the simplest to the most advanced and most efficient. Nonlinear matrix equations are at the heart of the analysis of structured Markov chains; they are analysed from the theoretical, the probabilistic, and the computational points of view. The set of solution methods includes functional iterations, doubling methods, logarithmic reduction, cyclic reduction, and subspace iteration, all described and analysed in detail and adapted to specific queueing models arising in applications. The book also offers a comprehensive and self-contained treatment of the structured matrix tools which are at the basis of the fastest algorithmic techniques for structured Markov chains. Results about Toeplitz matrices, displacement operators, and Wiener-Hopf factorizations are reported to the extent that they are useful for the numerical treatment of Markov chains. All solution methods are presented in detailed algorithmic form so that they can be coded in a high-level language with minimum effort.
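As a toy illustration of the functional iterations mentioned above (not code from the book), consider the scalar analogue of the QBD matrix equation G = A_{-1} + A_0 G + A_1 G^2: a random walk on levels that moves down, stays, or moves up with probabilities q, r, p. Its first-passage probability g to the level below satisfies g = q + r g + p g^2, and iterating from 0 converges monotonically to the minimal nonnegative solution:

```python
def minimal_g(q, r, p, tol=1e-12):
    """Functional iteration g_{n+1} = q + r*g_n + p*g_n^2, started at 0.

    Scalar analogue of the QBD equation G = A_{-1} + A_0 G + A_1 G^2;
    q, r, p are the down/stay/up probabilities (q + r + p = 1).
    Converges to the minimal nonnegative fixed point, which equals 1
    exactly when the walk drifts downward (q >= p).
    """
    g = 0.0
    while True:
        g_next = q + r * g + p * g * g
        if abs(g_next - g) < tol:
            return g_next
        g = g_next

# Transient walk (drift up): the two fixed points are q/p = 0.4 and 1;
# the iteration picks out the minimal one.
print(minimal_g(0.2, 0.3, 0.5))  # ~0.4
# Recurrent walk (drift down): the minimal solution is 1.
print(minimal_g(0.5, 0.3, 0.2))  # ~1.0
```

The matrix algorithms treated in the book (logarithmic reduction, cyclic reduction, etc.) accelerate exactly this kind of linearly convergent fixed-point scheme.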

The Correspondence of John Wallis (1616–1703): Volume II (1660 – September 1668)

Philip Beeley, Christoph Scriba

Published in print: 2005-01-13

This is the second book of a six-volume edition of the complete correspondence of one of the leading figures in the scientific revolution of the 17th century, the Oxford mathematician and theologian John Wallis (1616–1703). It covers the period from 1660 to September 1668, and thus some of the most decisive years of political and scientific reorganization in England during that century. The volume begins shortly before the restoration of the monarchy in 1660 and witnesses the emergence of the Royal Society from scientific circles which had existed earlier in London and Oxford. Wallis's involvement in the Royal Society stretches back to its beginnings. After its official establishment, he became one of its most active members, corresponding regularly with its secretary Henry Oldenburg and attending meetings whenever he was in London. Wallis contributed extensively to contemporary scientific debate both in England and on the continent, and many of his letters to Oldenburg on mathematical and physical topics were edited and published in the journal Philosophical Transactions for this purpose. The correspondence contained in the volume, much of which is previously unpublished, throws new light on the background to the scientific revolution and on university politics during this time. As Keeper of the Archives, Wallis was often called upon to prepare papers aimed at defending the University of Oxford's ancient rights and privileges, and was also required to spend a considerable amount of his time in London. To this extent, at least, his university commitments and scientific interests were able to go hand in hand.

Jon Williamson

Published in print: 2004-12-23

This book provides an introduction to, and analysis of, the use of Bayesian nets in causal modelling. It puts forward new conceptual foundations for causal network modelling: the book argues that probability and causality need to be interpreted as epistemic notions in order for the key assumptions behind causal models to hold. Under the epistemic view, probability and causality are understood in terms of the beliefs an agent ought to adopt. The book develops an objective Bayesian notion of probability and a corresponding epistemic theory of causality. This yields a general framework for causal modelling, which is extended to cope with recursive causal relations, logically complex beliefs, and changes in an agent's language.
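The key formal assumption behind Bayesian nets is that the joint distribution factorizes into each variable's probability conditional on its parents, P(x₁,…,xₙ) = ∏ᵢ P(xᵢ | pa(xᵢ)), and that an agent's beliefs can be updated by conditioning. A minimal sketch of this factorization, over a two-variable net with invented numbers (not an example from the book):

```python
# Tiny Bayesian net: Burglary -> Alarm, with made-up probabilities.
# The joint factorizes as P(b, a) = P(b) * P(a | b).
P_b = {True: 0.01, False: 0.99}
P_a_given_b = {True:  {True: 0.95, False: 0.05},
               False: {True: 0.02, False: 0.98}}

def joint(b, a):
    """Joint probability via the parent factorization."""
    return P_b[b] * P_a_given_b[b][a]

# Marginal belief in the alarm, summing out the burglary variable.
p_alarm = sum(joint(b, True) for b in (True, False))

# Posterior belief the agent ought to adopt on hearing the alarm,
# by Bayes' theorem: P(b | a) = P(b, a) / P(a).
p_b_given_alarm = joint(True, True) / p_alarm
```

On the epistemic reading the book defends, these numbers are read as degrees of belief an agent ought to hold, rather than physical chances.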

John Pell (1611–1685) and His Correspondence with Sir Charles Cavendish: The Mental World of an Early Modern Mathematician

Noel Malcolm, Jacqueline Stedall

Published in print: 2004-11-25

The mathematician John Pell was a member of the Royal Society and one of the generation of scientists that included Boyle, Wren, and Hooke. Although he left a huge body of manuscript materials, he has remained a neglected figure, whose papers have never been properly explored. This book is a full-length study of Pell and presents an in-depth account of his life and mathematical thinking based on a detailed study of his manuscripts. It also brings to life a strange, appealing, but awkward character, whose failure to publish his discoveries was caused by powerful scruples. In addition, this book shows that the range of Pell's interests extended far beyond mathematics. He was a key member of the circle of the ‘intelligencer’ Samuel Hartlib; he prepared translations of works by Descartes and Comenius; in the 1650s he served as Cromwell's envoy to Switzerland; and in the last part of his life he was an active member of the Royal Society, interested in the whole range of its activities. The study of Pell's life and thought thus illuminates many different aspects of 17th-century intellectual life. The book is in three parts. The first is a detailed biography of Pell; the second is an extended essay on his mathematical work; the third is a richly annotated edition of his correspondence with Sir Charles Cavendish. This correspondence, which has often been cited by scholars but has never been published in full, is concerned not only with mathematics but also with optics, philosophy, and many other subjects. Conducted mainly while Pell was in the Netherlands and Cavendish was also on the Continent, it is a fascinating example of the correspondence that flourished in the 17th-century ‘Republic of Letters’.
