Sample records for general evolutionary theory

We propose an evolutionary framework, the barrier theory of cancer, which is based on the distinction between barriers to oncogenesis and restraints. Barriers are defined as mechanisms that prevent oncogenesis. Restraints, which are more numerous, inhibit but do not prevent oncogenesis. Processes that compromise barriers are essential causes of cancer; those that interfere with restraints are exacerbating causes. The barrier theory is built upon the three evolutionary processes involved in oncogenesis: natural selection acting on multicellular organisms to mold barriers and restraints, natural selection acting on infectious organisms to abrogate these protective mechanisms, and oncogenic selection which is responsible for the evolution of normal cells into cancerous cells. The barrier theory is presented as a first step toward the development of a general evolutionary theory of cancer. Its attributes and implications for intervention are compared with those of other major conceptual frameworks for understanding cancer: the clonal diversification model, the stem cell theory and the hallmarks of cancer. The barrier theory emphasizes the practical value of distinguishing between essential and exacerbating causes. It also stresses the importance of determining the scope of infectious causation of cancer, because individual pathogens can be responsible for multiple essential causes in infected cells.

Despite advances in fields like genetics, evolutionary psychology, and human behavior and evolution--which generally focus on individual or small group behavior from a biological perspective--evolutionary biology has made little impact on studies of political change and social history. Theories of natural selection often seem inapplicable to human history because our social behavior is embedded in language (which makes possible the concepts of time and social identity on which what we call "history" depends). Peter Corning's Holistic Darwinism reconceptualizes evolutionary biology, making it possible to go beyond the barriers separating the social and natural sciences. Corning focuses on two primary processes: "synergy" (complex multivariate interactions at multiple levels between a species and its environment) and "cybernetics" (the information systems permitting communication between individuals and groups over time). Combining this frame of reference with inclusive fitness theory, it is possible to answer the most important (and puzzling) question in human history: How did a species that lived for millennia in hunter-gatherer bands form centralized states governing large populations of non-kin (including multi-ethnic empires as well as modern nation-states)? The fragility and contemporary ethnic violence in Kenya and the Congo should suffice as evidence that these issues need to be taken seriously. To explain the rise and fall of states as well as changes in human laws and customs--the core of historical research--it is essential to show how the provision of collective goods can overcome the challenge of self-interest and free-riding in some instances, yet fail to do so in others. To this end, it is now possible to consider how a state providing public goods can--under circumstances that often include effective leadership--contribute to enhanced inclusive fitness of virtually all its members. Because social behavior needs to adapt to ecology, but ecological

Economic agents are not always rational or farsighted and can make decisions according to simple behavioral rules that vary according to situation and can be studied using the tools of evolutionary game theory. Furthermore, such behavioral rules are themselves subject to evolutionary forces. Paying particular attention to the work of young researchers, this essay surveys the progress made over the last decade towards understanding these phenomena, and discusses open research topics of importance to economics and the broader social sciences.

Most evolutionary thinking is based on the notion of fitness and related ideas such as fitness landscapes and evolutionary optima. Nevertheless, it is often unclear what fitness actually is, and its meaning often depends on the context. Here we argue that fitness should not be a basal ingredient in verbal or mathematical descriptions of evolution. Instead, we propose that evolutionary birth-death processes, in which individuals give birth and die at ever-changing rates, should be the basis of evolutionary theory, because such processes capture the fundamental events that generate evolutionary dynamics. In evolutionary birth-death processes, fitness is at best a derived quantity, and owing to the potential complexity of such processes, there is no guarantee that there is a simple scalar, such as fitness, that would describe long-term evolutionary outcomes. We discuss how evolutionary birth-death processes can provide useful perspectives on a number of central issues in evolution.
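
Such a birth-death process can be simulated directly with the Gillespie algorithm, making fitness an emergent by-product of per-capita birth and death rates rather than an input. A minimal sketch follows; the two types, their rates, and the time horizon are hypothetical illustrations, not parameters from the article:

```python
import random

def birth_death(counts, birth, death, t_max, seed=0):
    """Gillespie simulation of an evolutionary birth-death process:
    each individual of a type gives birth or dies at that type's rate."""
    rng = random.Random(seed)
    n = dict(counts)
    t = 0.0
    while t < t_max and sum(n.values()) > 0:
        # one (type, event, total rate) entry per possible birth or death event
        events = [(typ, evt, rate[typ] * n[typ])
                  for typ in n
                  for evt, rate in (("birth", birth), ("death", death))]
        total = sum(r for _, _, r in events)
        t += rng.expovariate(total)   # exponential waiting time to next event
        x = rng.uniform(0.0, total)   # choose an event with probability rate/total
        for typ, evt, r in events:
            if x < r:
                n[typ] += 1 if evt == "birth" else -1
                break
            x -= r
    return n

# type A has a slight per-capita birth-rate advantage (numbers hypothetical)
final = birth_death({"A": 50, "B": 50},
                    birth={"A": 1.1, "B": 1.0},
                    death={"A": 1.0, "B": 1.0}, t_max=5.0)
```

Note that no scalar "fitness" appears anywhere: any long-run advantage of type A emerges from the rate asymmetry, which is the article's point about fitness being at best a derived quantity.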

Scientists observe nature, search for generalizations, and provide explanations for why the world is as it is. Generalizations are of two kinds. The first are descriptive and inductive, such as Boyle's Law. They are derived from observations and therefore refer to observables (in this case, pressure and volume). The second are often imaginative and form the axioms of a deductive theory, such as Newton's Laws of Motion. They often refer to unobservables (e.g. inertia and gravitation). Biology has many inductive generalizations (e.g. Bergmann's Rule and 'all cells arise from preexisting cells') but few, if any, recognized universal laws and virtually no deductive theory. Many biologists and philosophers of biology have agreed that predictive theory is inappropriate in biology, which is said to be more complex than physics, and that one can have nonpredictive explanations, such as the neo-Darwinian Theory of Evolution by Natural Selection. Other philosophers dismiss nonpredictive, explanatory theories, including evolutionary 'theory', as metaphysics. Most biologists do not think of themselves as philosophers or give much thought to the philosophical basis of their research. Nevertheless, their philosophy shows in the way they do research. The plethora of ad hoc (i.e. not universal) hypotheses indicates that biologists are reluctant inductivists in that the search for generalization does not have a high priority. Biologists test their hypotheses by verification. Theoretical physicists, in contrast, are deductive unifiers and test their explanatory hypotheses by falsification. I argue that theoretical biology (concerned with unobservables, such as fitness and natural selection) is not scientific because it lacks universal laws and predictive theory. In order to make this argument, I review the differences between verificationism and falsificationism, induction and deduction, and descriptive and explanatory laws. I show how these differ with a specific example of a

Evolutionary game theory applied to two interacting cell populations can yield quantitative predictions of the future densities of the two cell populations based on the initial interaction terms. We will discuss how, in a complex ecology, evolutionary game theory successfully predicts the future densities of strains of stromal and cancer cells (multiple myeloma), and discuss the possible clinical use of such analysis for predicting cancer progression. Supported by the National Science Foundation and the National Cancer Institute.
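
The kind of two-population analysis described above can be sketched with coupled replicator dynamics, where each population's strategy frequencies evolve against the other population's current state. The payoff matrices below are hypothetical placeholders, not the stromal/myeloma interaction terms measured in the study:

```python
def two_pop_replicator(A, B, x, y, dt=0.01, steps=5000):
    """Replicator dynamics for two interacting populations.
    x, y: strategy frequencies in populations 1 and 2.
    A[i][j]: payoff to strategy i of population 1 against strategy j of population 2.
    B[j][i]: payoff to strategy j of population 2 against strategy i of population 1."""
    for _ in range(steps):
        fx = [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(x))]
        fy = [sum(B[j][i] * x[i] for i in range(len(x))) for j in range(len(y))]
        ax = sum(xi * fi for xi, fi in zip(x, fx))  # mean payoff, population 1
        ay = sum(yj * fj for yj, fj in zip(y, fy))  # mean payoff, population 2
        x = [xi + dt * xi * (fi - ax) for xi, fi in zip(x, fx)]
        y = [yj + dt * yj * (fj - ay) for yj, fj in zip(y, fy)]
    return x, y

# hypothetical payoffs for two stromal and two tumour phenotypes
A = [[1.0, 0.5], [0.8, 0.9]]
B = [[0.6, 1.2], [1.0, 0.7]]
x, y = two_pop_replicator(A, B, [0.5, 0.5], [0.5, 0.5])
```

Fitting the interaction terms from measured initial densities, as the abstract describes, would amount to estimating A and B and then integrating forward in time.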

This dissertation is devoted to the problem of behavior design, which is a generalization of the standard global optimization problem: instead of generating the optimizer, the generalization produces, on the space of candidate optimizers, a probability density function referred to as the behavior. The generalization depends on a parameter, the level of selectivity, such that as this parameter tends to infinity, the behavior becomes a delta function at the location of the global optimizer. The motivation for this generalization is that traditional off-line global optimization is non-resilient and non-opportunistic. That is, traditional global optimization is unresponsive to perturbations of the objective function. On-line optimization methods that are more resilient and opportunistic than their off-line counterparts typically consist of the computationally expensive sequential repetition of off-line techniques. A novel approach to inexpensive resilience and opportunism is to utilize the theory of Selective Evolutionary Generation Systems (SECS), which sequentially and probabilistically selects a candidate optimizer based on the ratio of the fitness values of two candidates and the level of selectivity. Using time-homogeneous, irreducible, ergodic Markov chains to model a sequence of local, and hence inexpensive, dynamic transitions, this dissertation proves that such transitions result in behavior that is called rational; such behavior is desirable because it can lead to both efficient search for an optimizer as well as resilient and opportunistic behavior. The dissertation also identifies system-theoretic properties of the proposed scheme, including equilibria, their stability and their optimality. Moreover, this dissertation demonstrates that the canonical genetic algorithm with fitness proportional selection and the (1+1) evolutionary strategy are particular cases of the scheme. Applications in three areas illustrate the versatility of the SECS theory: flight
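
The core selection mechanism, a Markov chain that keeps or replaces a candidate based on the fitness ratio of two candidates and a selectivity level, can be sketched as follows. The objective, neighbourhood, and the exact acceptance formula here are hypothetical illustrations of the idea, not the dissertation's definitions:

```python
import random

def secs_step(x, fitness, neighbors, k, rng):
    """One selection step: propose a local neighbor and keep it with a
    probability driven by the fitness ratio raised to the selectivity k."""
    y = rng.choice(neighbors(x))
    fx, fy = float(fitness(x)), float(fitness(y))
    p = fy**k / (fx**k + fy**k)  # as k -> infinity the choice becomes greedy
    return y if rng.random() < p else x

# toy landscape on the integers 0..20 with its peak at x = 12 (hypothetical)
fitness = lambda x: 21 - abs(x - 12)
neighbors = lambda x: [max(0, x - 1), min(20, x + 1)]

rng = random.Random(1)
x = 0
for _ in range(2000):
    x = secs_step(x, fitness, neighbors, k=8, rng=rng)
```

For this toy chain the stationary weights are proportional to fitness(x)**k, so raising the selectivity k concentrates the long-run behavior near the optimizer, mirroring the delta-function limit described above while remaining responsive if the objective is perturbed mid-run.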

The article is an invited response to a target article by Joseph Carroll entitled "An evolutionary paradigm for literary study". It argues that the target article misuses the fact that works of art are based on adaptations that were fitness-enhancing in the era of evolutionary adaptation to claim that great works of art are also automatically fitness-enhancing in the present-day environment, and that there are simple correlations between whether a work of art has high aesthetic value and whether it is fitness-enhancing. Keywords: evolutionary aesthetics, film theory, literary theory.

Evolutionary theory studies the processes that transform the economy (firms, institutions, industries, employment, production, trade and growth) through the actions of diverse agents drawing on experience and interactions, using evolutionary methodology. Evolutionary theory analyses the unleashing of a process of technological and institutional innovation by generating and testing a diversity of ideas which discover and accumulate more survival value, for the costs incurred, than competing alternatives. This paper studies the behavior of firms on the market using evolutionary theory. It presents in full the developments that have led to the re-assessment of theories of the firm, starting from the criticism of Coase's theory for its lack of testable hypotheses and its non-operative definition of transaction costs. In the literature, studies of the firm were allotted a secondary place for a long period of time; today the new theories of the firm hold a dominant place in the economic analysis of firms. In an article published in 1937, Ronald H. Coase identified the main sources of the cost of using the market mechanism. The theory of the firm is intensively studied in the literature with regard to the survival, competitiveness and innovation of firms on the market. The research of Nelson and Winter, "An Evolutionary Theory of Economic Change" (1982), is the starting point for a modern literature which approaches the theory of the firm from an evolutionary perspective. Nelson and Winter have shown that the "orthodox" theory is objectionable primarily because its hypothesis of profit maximization has a normative character and is not valid in every situation. Nelson and Winter reconsidered microeconomic analysis, showing that attention should be paid not to market equilibrium but rather to dynamic processes resulting from irreversible

Comments on the article Leadership, followership, and evolution: Some lessons from the past by Van Vugt, Hogan, and Kaiser. This article offers a fresh perspective on leaders, followers, and their possible origins in nonhuman and primitive human behavior patterns. The connections between group coordination, leadership, and game theory have some…

In two papers we review game theory applications in biology below the level of cognitive living beings. It can be seen that evolution and natural selection replace the rationality of the actors appropriately. Even in these micro worlds, competing situations and cooperative relationships can be found and modeled by evolutionary game theory. Those units at the lowest levels of life also show different strategies for different environmental situations or different partners. We give a wide overview of evolutionary game theory applications to microscopic units. In this first review, situations at the cellular level are tackled. In particular, metabolic problems are discussed, such as ATP-producing pathways, secretion of public goods and cross-feeding. Further topics are cyclic competition among more than two partners, intra- and inter-cellular signalling, the struggle between pathogens and the immune system, and the interactions of cancer cells. Moreover, we introduce the theoretical basics to encourage scientists to investigate problems in cell biology and molecular biology by evolutionary game theory.

The authors review psychology's historical, competing perspectives on human motivation and propose a new comprehensive theory. The new theory is based on evolutionary principles as proposed by C. Darwin (1859) and modified by W. D. Hamilton (1964, 1996), R. L. Trivers (1971, 1972), and R. Dawkins (1989). The theory unifies biological, behavioral, and cognitive approaches to motivation. The theory is neuropsychological and addresses conscious and nonconscious processes that underlie motivation, emotion, and self-control. The theory predicts a hierarchical structure of motives that are measurable as individual differences in human behavior. These motives are related to social problem domains (D. B. Bugental, 2000; D. T. Kenrick, N. P. Li, & J. Butner, 2003), and each is hypothesized to solve a particular problem of human inclusive fitness.

These Notes grew from my research in evolutionary biology, specifically on the theory of evolutionarily stable strategies (ESS theory), over the past ten years. Personally, evolutionary game theory has given me the opportunity to transfer my enthusiasm for abstract mathematics to more practical pursuits. I was fortunate to have entered this field in its infancy when many biologists recognized its potential but were not prepared to grant it general acceptance. This is no longer the case. ESS theory is now a rapidly expanding (in both applied and theoretical directions) force that no evolutionary biologist can afford to ignore. Perhaps, to continue the life-cycle metaphor, ESS theory is now in its late adolescence and displays much of the optimism and exuberance of this exciting age. There are dangers in writing a text about a theory at this stage of development. A comprehensive treatment would involve too many loose ends for the reader to appreciate the central message. On the other hand, the current central m...

A new class of generally covariant gauge theories in four space-time dimensions is investigated. The field variables are taken to be a Lie algebra valued connection 1-form and a scalar density. Modulo an important degeneracy, complex [euclidean] vacuum general relativity corresponds to a special case in this class. A canonical analysis of the generally covariant gauge theories with the same gauge group as general relativity shows that they describe two degrees of freedom per space point, qualifying therefore as a new set of neighbors of general relativity. The modification of the algebra of the constraints with respect to the general relativity case is computed; this is used in addressing the question of how general relativity stands out from its neighbors.

A generalized etale cohomology theory is a theory which is represented by a presheaf of spectra on an etale site for an algebraic variety, in analogy with the way an ordinary spectrum represents a cohomology theory for spaces. Examples include etale cohomology and etale K-theory. This book gives new and complete proofs of both Thomason's descent theorem for Bott periodic K-theory and the Nisnevich descent theorem. In doing so, it exposes most of the major ideas of the homotopy theory of presheaves of spectra, and generalized etale homology theories in particular. The treatment includes, for the purpose of adequately dealing with cup product structures, a development of stable homotopy theory for n-fold spectra, which is then promoted to the level of presheaves of n-fold spectra. This book should be of interest to all researchers working in fields related to algebraic K-theory. The techniques presented here are essentially combinatorial, and hence algebraic. An extensive background in traditional stable hom...

The Generalized Chiral Perturbation Theory enlarges the framework of the standard χPT (Chiral Perturbation Theory), relaxing certain assumptions which do not necessarily follow from QCD or from experiment, and which are crucial for the usual formulation of the low energy expansion. In this way, experimental tests of the foundations of the standard χPT become possible. Emphasis is put on physical aspects rather than on formal developments of GχPT.

Various attempts to formulate the fundamental physical interactions in the framework of unified geometric theories have recently gained considerable success (Kaluza, 1921; Klein, 1926; Trautmann, 1970; Cho, 1975). Symmetries of the spacetime and so-called internal spaces seem to play a key role in investigating both the fundamental interactions and the abundance of elementary particles. The author presents a category-theoretic description of a generalization of the G-theory concept and its application to geometric compactification and dimensional reduction. The main reasons for using categories and functors as tools are the clarity and the level of generalization one can obtain.

Developmental biology and evolutionary biology are both mature integrative disciplines which started in the 19th century and then followed parallel and independent scientific pathways. Recently, a genetical component has stepped into both disciplines (developmental genetics and evolutionary genetics), pointing out the need for future convergent maturation. Indeed, the Evo-Devo approach is becoming popular among developmental biologists, based on the facts that distant groups share a common ancestry, that precise phylogenies can be worked out, and that homologous genes often play similar roles during the development of very different organisms. In this essay, I try to show that the real future of Evo-Devo thinking is still broader. Evolutionary theory is a set of diverse concepts which can and should be used in any biological field. Evolutionary thinking trains one to ask "why" questions and to provide logical and plausible answers. It can shed some light on a diversity of general problems such as how to distinguish homologies from analogies, the costs and benefits of multicellularity, the origin of novel structures (e.g. the head), or the evolution of sexual reproduction. In the next decade, we may expect a progressive convergence between developmental genetics and quantitative genetics.

The recent neo-Schumpeterian and evolutionary economics appears to cover a much smaller range of topics than Joseph Schumpeter confronted. Thus, it has hardly been recognised that Schumpeter wanted to develop a general theory that served the analysis of evolution in any sector of social life...

Burkart et al. contend that general intelligence poses a major evolutionary puzzle. This assertion presupposes a reification of general intelligence - that is, assuming that it is one "thing" that must have been selected as such. However, viewing general intelligence as an emerging property of multiple cognitive abilities (each with their own selective advantage) requires no additional evolutionary explanation.

In this paper, we construct a mathematical model that applies tools from evolutionary game theory to issues in organizational ecology. Evolutionary game theory shares the key feature of mathematical rigor with the industrial organization tradition, but is similar to organizational ecology by emphasizing evolutionary dynamics. Evolutionary game theory may well be a complementary modeling tool for the analytical study of organizational ecology issues, next to formal logic, standard ga...

Evolutionary games have considerable unrealized potential for modeling substantive economic issues. They promise richer predictions than orthodox game models but often require more extensive specifications. This paper exposits the specification of evolutionary game models and classifies the possible asymptotic behavior for one and two dimensional models.

This paper suggests that the analysis of Schumpeterian competition within the Nelson-Winter model should be complemented with evolutionary game theory. This model and its limitations for density-dependent Schumpeterian strategies are presented in terms of the equations of evolutionary dynamics. F...

Kanazawa (2008), Templer (2008), and Templer and Arikawa (2006) claimed to have found empirical support for evolutionary theories of race differences in intelligence by correlating estimates of national IQ with indicators of reproductive strategies, temperature, and geographic distance from Africa.

Several recent books have claimed to integrate literary study with evolutionary biology. All of the books here considered, except Robert Storey's, adopt conceptions of evolutionary theory that are in some way marginal to the Darwinian adaptationist program. All the works attempt to connect evolutionary study with various other disciplines or methodologies: for example, with cultural anthropology, cognitive psychology, the psychology of emotion, neurobiology, chaos theory, or structuralist linguistics. No empirical paradigm has yet been established for this field, but important steps have been taken, especially by Storey, in formulating basic principles, identifying appropriate disciplinary connections, and marking out lines of inquiry. Reciprocal efforts are needed from biologists and social scientists.

This study attempts to examine evolutionary theory and creationism objectively without engaging in an apology for or a criticism of either. It compares the presuppositions and assumptions of both systems, and examines the role of faith in religion and in the scientific theory of evolution. After discussing the nature of the scientific method and the development of the theory of evolution, the study explores the dichotomy of faith and reason, the ways in which these operate in theories of int...

The main steps in plotting the current gravitation theory and some prospects of its subsequent development are reviewed. The attention is concentrated on a comparison of the relativistic gravitational field with other physical fields. Two equivalent formulations of the general relativity (GR) - geometrical and field-theoretical - are considered in detail. It is shown that some theories of gravity constructed as the field theories at a flat background space-time are in fact just different formulations of GR and not alternative theories

At the heart of evolutionary theory is the concept of 'fitness', which is, standardly, an organism's reproductive success. Many evolutionary theorists argue, however, that to explain the evolution of social traits, such as altruism, we must use a different notion of fitness. This 'inclusive fitness', which includes the reproductive success of relatives, is seen as indispensable for studying social evolution. Recently, however, both biologists and philosophers have critically scrutinized its s...

The neoclassical theory of the firm deals with the pattern of perfect competition, within which the perfect information available to economic agents provides instant allocation of production factors and access to economic goods. The Austrian School (C. Menger, L. von Mises, Hayek, etc.) supported the idea of minimal state intervention on the markets, bringing important conceptual developments on the theory of the firm. Hirschleifer (1982) put forward the model of social and institutional functioning, arguing that game theory is able to predict the outcome of collective behavior and the human characteristics necessary for building the respective institutions. Evolutionary theory grants the firm and the entrepreneur recognition of the functions of innovation, of generating and exploiting information, and of organizing and coordinating production. The evolutionary perspective of the firm assumes the existence of a body of knowledge that is acquired through and builds up the organizational memory, subsequently found in routines, all choices being made based on these routines (Nelson and Winter, 2002). The evolution of the firm is considered to be similar to natural selection, but unlike classic market selection, the evolutionists suggest the existence of a plurality of selection media. The present research is structured as follows: a brief introduction to the theories of the firm; the second part of the paper analyzes the theories of the firm from an institutional, neo-institutional and evolutionary perspective. In the third part of the paper the evolutionary games are described and analyzed from the evolutionary perspective of the firm. The last part of the paper is a study of the replicator dynamics of the "hawk-dove" game. The final conclusions of the paper show that evolutionary theory brings valuable contributions to the foundation of explanations regarding economic phenomena, indicating new directions for advanced
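
The hawk-dove replicator dynamic mentioned in the last part of the paper can be sketched as follows, using the standard payoff structure (V the contested resource, C the cost of escalated fights); the particular numbers are illustrative, not taken from the paper:

```python
def hawk_dove_replicator(x, V=2.0, C=4.0, dt=0.01, steps=10000):
    """Replicator dynamics for the hawk-dove game; x is the hawk frequency.
    Payoffs: H vs H -> (V - C)/2, H vs D -> V, D vs H -> 0, D vs D -> V/2."""
    for _ in range(steps):
        f_hawk = x * (V - C) / 2 + (1 - x) * V
        f_dove = (1 - x) * V / 2
        x += dt * x * (1 - x) * (f_hawk - f_dove)  # Euler step of dx/dt
    return x

# for C > V the interior rest point x* = V/C attracts from both sides
print(hawk_dove_replicator(0.1), hawk_dove_replicator(0.9))
```

Both runs approach the mixed rest point x* = V/C = 0.5: when fighting costs exceed the resource value, neither pure hawks nor pure doves can take over, which is the selection logic the paper applies to competing firm strategies.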

The theory of evolution continues to be a bone of contention among certain groups of theistic believers. More often than not, debate about it is conducted in terms of firm blanket statements, with one side maintaining that evolution is just a proven fact and the other claiming that it is a theory in crisis. This paper aims to bring some light to this heated debate by doing two things. First, we distinguish between three different ‘layers’ within the theory of evolution: Historical Evolution, ...

This study tests two evolutionary hypotheses on grandparental investments differentiated by the child's sex: the paternity uncertainty hypothesis and the Trivers-Willard hypothesis. Data are from two culturally different countries: the Dutch Longitudinal Aging Study Amsterdam (n=2375) and the Chinese Anhui Survey (n=4026). In the Netherlands, grandparental investments are biased towards daughters' children, which is in accordance with the paternity uncertainty hypothesis. But in China, grandparental investments are biased towards sons' children, which is in conflict with the paternity uncertainty hypothesis. This study found no support for the Trivers-Willard hypothesis. These results raise doubts over the relevance of paternity uncertainty as an explanation of a grandparental investment bias towards daughters' children that is often found in Western populations. The results suggest that discriminative grandparental investments are better understood as the outcome of cultural prescriptions and economic motives.

Has the emergence of evolutionary psychology had an increasing impact on personality and social psychological research published over the past two decades? If so, is its growing influence substantially different from that of other emerging psychological areas? These questions were addressed in the present study by conducting a content analysis of the Journal of Personality and Social Psychology (JPSP) from 1985 to 2004 using the PsycINFO online abstract database. Specifically, keyword searches for “evol*” or “Darwin*” revealed that the percentage of JPSP articles drawing on evolutionary theory was modest, but increased significantly between 1985 and 2004. To compare the growing impact of evolutionary psychology with other psychological areas, similar keyword searches were performed in JPSP for emotion and motivation, judgment and decision making, neuroscience and psychophysiology, stereotyping and prejudice, and terror management theory. The increase in evolutionary theory in JPSP over time was practically equal to the mean increase over time for the other five areas. Thus, evolutionary psychology has played an increasing role in shaping personality and social psychological research over the past 20 years, and is growing at a rate consistent with other emerging psychological areas.

In this and an accompanying paper we review the use of game theoretical concepts in cell biology and molecular biology. This review focuses on the subcellular level by considering viruses, genes, and molecules as players. We discuss in which way catalytic RNA can be treated by game theory. Moreover, genes can compete for success in replication and can have different strategies in interactions with other genetic elements. Also transposable elements, or "jumping genes", can act as players because they usually bear different traits or strategies. Viruses compete in the case of co-infecting a host cell. Proteins interact in a game theoretical sense when forming heterodimers. Finally, we describe how the Shapley value can be applied to enzymes in metabolic pathways. We show that game theory can be successfully applied to describe and analyse scenarios at the molecular level resulting in counterintuitive conclusions.
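
The Shapley value mentioned for enzymes in metabolic pathways can be computed by averaging each player's marginal contribution over all orderings of the players. The three-enzyme pathway and its flux function below are hypothetical, chosen only to make the attribution concrete:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average each player's marginal contribution to the
    characteristic function v over all join orders of the players."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in phi}

# hypothetical 3-enzyme pathway: flux needs both E1 and E2; E3 merely boosts it
def flux(S):
    base = 10.0 if {"E1", "E2"} <= S else 0.0
    return base + (2.0 if base and "E3" in S else 0.0)

phi = shapley(["E1", "E2", "E3"], flux)  # values sum to the grand-coalition flux
```

Here the two essential enzymes each receive 17/3 units of credit and the booster only 2/3, illustrating how the Shapley value apportions a pathway's output among enzymes according to their marginal importance.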

Objective: Ideas from evolutionary theories are increasingly taken up in health promotion. This article seeks to demonstrate how such a trend has the potential to embed essentialist and limiting stereotypes of women and men in health promotion practice. Design: We draw on material gathered for a larger ethnographic study that examined how…

Dempster-Shafer evidence theory is a primary methodology for multisource information fusion because it is good at dealing with uncertain information. This theory provides a Dempster's rule of combination to synthesize multiple evidences from various information sources. However, in some cases, counter-intuitive results may be obtained based on that combination rule. Numerous new or improved methods have been proposed to suppress these counter-intuitive results based on perspectives such as minimizing the information loss or deviation. Inspired by evolutionary game theory, this paper considers a biological and evolutionary perspective to study the combination of evidences. An evolutionary combination rule (ECR) is proposed to help find the most biologically supported proposition in a multievidence system. Within the proposed ECR, we develop a Jaccard matrix game to formalize the interaction between propositions in evidences, and utilize the replicator dynamics to mimic the evolution of propositions. Experimental results show that the proposed ECR can effectively suppress the counter-intuitive behaviors that appear in typical paradoxes of evidence theory, compared with many existing methods. Properties of the ECR, such as the solution's stability and convergence, have been mathematically proved as well.
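
The flavour of the approach, replicator dynamics whose payoffs come from Jaccard similarity between propositions, can be sketched as below. This is a simplified reading of the idea rather than the paper's exact ECR (the actual rule and normalization may differ), and the frame {a, b, c} with three focal sets is hypothetical:

```python
def jaccard(a, b):
    """Jaccard similarity of two non-empty sets."""
    return len(a & b) / len(a | b)

def evolve_propositions(props, masses, steps=2000, dt=0.01):
    """Replicator dynamics over propositions: proposition i's payoff is its
    Jaccard similarity to the others, weighted by their current shares."""
    x = list(masses)
    n = len(props)
    for _ in range(steps):
        f = [sum(jaccard(props[i], props[j]) * x[j] for j in range(n))
             for i in range(n)]
        avg = sum(xi * fi for xi, fi in zip(x, f))
        x = [xi + dt * xi * (fi - avg) for xi, fi in zip(x, f)]
    return x

# three focal sets over a hypothetical frame {a, b, c}
props = [frozenset("a"), frozenset("ab"), frozenset("b")]
shares = evolve_propositions(props, [1/3, 1/3, 1/3])
```

Starting from equal shares, the intermediate proposition {a, b}, which overlaps both singletons, accumulates almost all of the mass: it is the "most biologically supported" proposition in this toy system, which is the kind of outcome the ECR is designed to select.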

It is shown that if, on empirical grounds, one rules out the existence of cosmic fields of Dicke-Brans (scalar) and Will-Nordtvedt (vector, tensor) type, then the most general experimentally viable and theoretically reasonable theory of gravitation seems to be a Λ-dependent generalization of the Einstein and Yilmaz theories, which reduces to the former for Λ=0 and to the latter for Λ=1.

For more than 150 years, all biological theories, including those of C. Darwin and Mendel, were based on the idea of synchronous evolution. They fit unitary monomodal systems (asexual, symmetrical) but do not work for binary (dioecious, asymmetrical) ones. Examples of such binary conjugated differentiations are the two sexes, DNA-proteins, autosomes-sex chromosomes, and the right and left brain hemispheres and hands. Understanding them requires "asynchronous" theories. Such theories were proposed by the Russian theoretical biologist Vigen A. Geodakyan for sexual, brain and body, and chromosomal differentiations. All the theories are interconnected and are based on the principle of conjugated subsystems. This article covers the basic tenets of the evolutionary theory of asymmetry and answers the following questions: What benefits does lateralization provide? What logic, what principle is it based on? Why do brain hemispheres control the opposite sides of the body? Why is laterality closely related to sex? What are the biological prerequisites of terrorism?

The thesis focuses on various issues of general equilibrium theory and can approximately be divided into three parts. The first part of the thesis studies generalized equilibria in the Arrow-Debreu model in the situation where the strong survival assumption is not satisfied. Chapter four deals with

The extension of the principle of relativity to general coordinate systems is based on the hypothesis that an accelerated observer is locally equivalent to a hypothetical inertial observer with the same velocity as the noninertial observer. This hypothesis of locality is expected to be valid for classical particle phenomena as well as for classical wave phenomena but only in the short-wavelength approximation. The generally covariant theory is therefore expected to be in conflict with the quantum theory which is based on wave-particle duality. This is explicitly demonstrated for the frequency of electromagnetic radiation measured by a uniformly rotating observer. The standard Doppler formula is shown to be valid only in the geometric optics approximation. A new definition for the frequency is proposed, and the resulting formula for the frequency measured by the rotating observer is shown to be consistent with expectations based on the classical theory of electrons. A tentative quantum theory is developed on the basis of the generalization of the Bohr frequency condition to include accelerated observers. The description of the causal sequence of events is assumed to be independent of the motion of the observer. Furthermore, the quantum hypothesis is supposed to be valid for all observers. The implications of this theory are critically examined. The new formula for frequency, which is still based on the hypothesis of locality, leads to the observation of negative energy quanta by the rotating observer and is therefore in conflict with the quantum theory

Human cultural traits-behaviors, ideas, and technologies that can be learned from other individuals-can exhibit complex patterns of transmission and evolution, and researchers have developed theoretical models, both verbal and mathematical, to facilitate our understanding of these patterns. Many of the first quantitative models of cultural evolution were modified from existing concepts in theoretical population genetics because cultural evolution has many parallels with, as well as clear differences from, genetic evolution. Furthermore, cultural and genetic evolution can interact with one another and influence both transmission and selection. This interaction requires theoretical treatments of gene-culture coevolution and dual inheritance, in addition to purely cultural evolution. In addition, cultural evolutionary theory is a natural component of studies in demography, human ecology, and many other disciplines. Here, we review the core concepts in cultural evolutionary theory as they pertain to the extension of biology through culture, focusing on cultural evolutionary applications in population genetics, ecology, and demography. For each of these disciplines, we review the theoretical literature and highlight relevant empirical studies. We also discuss the societal implications of the study of cultural evolution and of the interactions of humans with one another and with their environment.

The evolutionist theory proposed by Darwin is one of the fundamental pillars of biology. Darwin's theory was solidified in the modern synthesis of evolutionary biology thanks to the rediscovery of Mendel's work, which laid the genetic basis of heredity. In recent years, great progress has been made in the sequencing and analysis of complete genomes, which has provided several elements for discussing some Darwinist tenets of evolution. The evidence of gene duplication and whole-genome duplication, horizontal gene transfer and the endosymbiosis process questions the idea that evolution proceeds through the gradual accumulation of infinitesimally small random changes. The new evidence of neutral selection in the genomic context reveals other mechanisms of evolution not necessarily related to the idea of progress or to an adaptationist program, as originally stated by Darwin's theory. In this paper, I present these and other concepts, such as gene regulation, molecular mechanisms of development and some environmental aspects (epigenesis and phenotypic plasticity), as starting points for thinking about the need to update evolutionary theory, which in my opinion should be more inclusive, pluralistic and consistent with our current knowledge.

This book presents an introduction to Evolutionary Game Theory (EGT), an emerging field in the area of complex systems attracting the attention of researchers from disparate scientific communities. EGT allows one to represent and study several complex phenomena, such as the emergence of cooperation in social systems, the role of conformity in shaping the equilibrium of a population, and the dynamics in biological and ecological systems. Since EGT models belong to the area of complex systems, statistical physics constitutes a fundamental ingredient for investigating their behavior. At the same time, the complexity of some EGT models, such as those realized by means of agent-based methods, often requires the implementation of numerical simulations. Therefore, beyond providing an introduction to EGT, this book gives a brief overview of the main statistical physics tools (such as phase transitions and the Ising model) and computational strategies for simulating evolutionary games (such as Monte Carlo algor...

We present a general method for analyzing the runtime of parallel evolutionary algorithms with spatially structured populations. Based on the fitness-level method, it yields upper bounds on the expected parallel runtime. This allows for a rigorous estimate of the speedup gained by parallelization. Tailored results are given for common migration topologies: ring graphs, torus graphs, hypercubes, and the complete graph. Example applications for pseudo-Boolean optimization show that our method is easy to apply and that it gives powerful results. In our examples the performance guarantees improve with the density of the topology. Surprisingly, even sparse topologies such as ring graphs lead to a significant speedup for many functions while not increasing the total number of function evaluations by more than a constant factor. We also identify which number of processors leads to the best guaranteed speedups, thus giving hints on how to parameterize parallel evolutionary algorithms.
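The fitness-level method the authors build on can be illustrated in its simplest, sequential form. The sketch below is our own, not taken from the paper: it computes the classic fitness-level upper bound for the (1+1) EA on OneMax and compares it with one simulated run; the parallel, topology-dependent bounds refine this same idea.

```python
import math
import random

def fitness_level_bound(n):
    """Fitness-level upper bound for the (1+1) EA on OneMax:
    E[T] <= sum_i 1/s_i, with level-leaving probability
    s_i >= (n-i)/(e*n), giving roughly e*n*ln(n)."""
    return sum(math.e * n / (n - i) for i in range(n))

def one_plus_one_ea(n, seed=0):
    """Run the (1+1) EA on OneMax; return the number of evaluations."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    t = 0
    while sum(x) < n:
        # standard bit mutation: flip each bit with probability 1/n
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        t += 1
        if sum(y) >= sum(x):   # elitist acceptance
            x = y
    return t

n = 50
bound = fitness_level_bound(n)
t = one_plus_one_ea(n)
print(t, "vs. expected-runtime bound", round(bound))
```

A single run can of course exceed the bound, since the bound holds for the expectation, not for every realization.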

Social interactions are characterized by distinct forms of interdependence, each of which has unique effects on how behavior unfolds within the interaction. Despite this, little is known about the psychological mechanisms that allow people to detect and respond to the nature of interdependence in any given interaction. We propose that interdependence theory provides clues regarding the structure of interdependence in the human ancestral past. In turn, evolutionary psychology offers a framework for understanding the types of information processing mechanisms that could have been shaped under these recurring conditions. We synthesize and extend these two perspectives to introduce a new theory: functional interdependence theory (FIT). FIT can generate testable hypotheses about the function and structure of the psychological mechanisms for inferring interdependence. This new perspective offers insight into how people initiate and maintain cooperative relationships, select social partners and allies, and identify opportunities to signal social motives.

The Lorenz-Mie theory, describing the interaction between a homogeneous sphere and an electromagnetic plane wave, is likely one of the most famous theories in light scattering. But, with the advent of lasers and their increasing use in various fields, it has become too restrictive to meet many modern requirements. The book deals with generalized Lorenz-Mie theories in which the illuminating beam is an arbitrarily shaped electromagnetic beam, relying on the method of separation of variables. Particular emphasis is placed on the case of the homogeneous sphere, but other regular particles are considered too. An extensive discussion of the methods available for evaluating the beam shape coefficients describing the illuminating beam is provided. Applications concern many fields such as optical particle sizing and, more generally, optical particle characterization, morphology-dependent resonances, or mechanical effects of light for optical trapping, optical twe...

The idea that behavior is selected by its consequences in a process analogous to organic evolution has been discussed for over 100 years. A recently proposed theory instantiates this idea by means of a genetic algorithm that operates on a population of potential behaviors. Behaviors in the population are represented by numbers in decimal integer (phenotypic) and binary bit string (genotypic) forms. One behavior from the population is emitted at random each time tick, after which a new population of potential behaviors is constructed by recombining parent behavior bit strings. If the emitted behavior produced a benefit to the organism, then parents are chosen on the basis of their phenotypic similarity to the emitted behavior; otherwise, they are chosen at random. After parent behavior recombination, the population is subjected to a small amount of mutation by flipping random bits in the population's bit strings. The behavior generated by this process of selection, reproduction, and mutation reaches equilibrium states that conform to every empirically valid equation of matching theory, exactly and without systematic error. These equations are known to describe the behavior of many vertebrate species, including humans, in a variety of experimental, naturalistic, natural, and social environments. The evolutionary theory also generates instantaneous dynamics and patterns of preference change in constantly changing environments that are consistent with the dynamics of live-organism behavior. These findings support the assertion that the world of behavior we observe and measure is generated by evolutionary dynamics. PsycINFO Database Record (c) 2013 APA, all rights reserved
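A minimal sketch of the selectionist scheme described above follows. All concrete choices here (population size, bit length, mutation rate, the similarity-based parent weighting, and the reinforced range of behaviors) are hypothetical illustrations, not values from the theory.

```python
import random

def evolve_behavior(reward_set, pop_size=100, bits=8, ticks=3000, mut=0.01, seed=1):
    """Behaviors are bit strings; one is emitted per tick; reinforcement
    makes phenotypically similar behaviors more likely to parent the
    next population. Returns the fraction of reinforced emissions."""
    rng = random.Random(seed)
    pop = [rng.getrandbits(bits) for _ in range(pop_size)]
    hits = 0
    for _ in range(ticks):
        emitted = rng.choice(pop)
        if emitted in reward_set:           # reinforced behavior
            hits += 1
            # parent fitness ∝ closeness to the emitted phenotype (integer)
            weights = [1.0 / (1 + abs(p - emitted)) for p in pop]
            parents = rng.choices(pop, weights=weights, k=2 * pop_size)
        else:                                # no reinforcement: random parents
            parents = rng.choices(pop, k=2 * pop_size)
        new_pop = []
        for i in range(pop_size):
            a, b = parents[2 * i], parents[2 * i + 1]
            cut = rng.randrange(1, bits)     # one-point crossover on genotypes
            mask = (1 << cut) - 1
            child = (a & mask) | (b & ~mask)
            for j in range(bits):            # point mutation: flip bits w.p. mut
                if rng.random() < mut:
                    child ^= 1 << j
            new_pop.append(child & ((1 << bits) - 1))
        pop = new_pop
    return hits / ticks

# Behaviors 100-120 form a hypothetical reinforced operant class.
rate = evolve_behavior(set(range(100, 121)))
print(round(rate, 2))
```

Over many ticks the population drifts toward the reinforced region, so the emission rate of reinforced behaviors rises above the chance level of the initial random population.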

Our brain is a complex system of interconnected regions spontaneously organized into distinct networks. The integration of information between and within these networks is a continuous process that can be observed even when the brain is at rest, i.e. not engaged in any particular task. Moreover, such spontaneous dynamics have predictive value for individual cognitive profiles and constitute a potential marker in neurological and psychiatric conditions, making their understanding of fundamental importance in modern neuroscience. Here we present a theoretical and mathematical model based on an extension of evolutionary game theory on networks (EGN), able to capture the brain's interregional dynamics by balancing emulative and non-emulative attitudes among brain regions. This results in the net behavior of nodes composing resting-state networks identified using functional magnetic resonance imaging (fMRI), determining their moment-to-moment level of activation and inhibition as expressed by positive and negative shifts in BOLD fMRI signal. By spontaneously generating low-frequency oscillatory behaviors, the EGN model is able to mimic functional connectivity dynamics, approximate fMRI time series on the basis of an initial subset of available data, as well as simulate the impact of network lesions and provide evidence of compensation mechanisms across networks. Results suggest evolutionary game theory on networks as a new potential framework for the understanding of human brain network dynamics.

Calls for improving research-informed policy in education are everywhere. Yet, while there is an increasing trend towards science-based practice, there remains little agreement over which of the sciences to consult and how to organize a collective effort between them. What education lacks is a general theoretical framework through which policies can be constructed, implemented, and assessed. This dissertation submits that evolutionary theory can provide a suitable framework for coordinating educational policies and practice, and can provide the entire field of education with a clearer sense of how to better manage the learning environment. This dissertation explores two broad paths that outline the conceptual foundations for an Evolutionary Education Science: "Teaching Evolution" and "Using Evolution to Teach." Chapter 1 introduces both of these themes. After describing why evolutionary science is best suited for organizing education research and practice, Chapter 1 proceeds to "teach" an overview of the "evolutionary toolkit"---the mechanisms and principles that underlie the modern evolutionary perspective. The chapter then employs the "toolkit" in examining education from an evolutionary perspective, outlining the evolutionary precepts that can guide theorizing and research in education, and describing how educators can "use evolution to teach." Chapters 2-4 expand on this second theme. Chapters 2 and 3 describe an education program for at-risk 9th and 10th grade students, the Regents Academy, designed entirely with evolutionary principles in mind. The program was rigorously assessed in a randomized control design and has demonstrated success at improving students' academic performance (Chapter 2) and social & behavioral development (Chapter 3). Chapter 4 examines current teaching strategies that underlie effective curriculum-instruction-assessment practices and proposes a framework for organizing successful, evidence-based strategies for neural

The first-principles order parameter theory of freezing, proposed in an earlier work, has been successful in yielding quantitative agreement with known freezing parameters for monoatomic liquids forming solids with one atom per unit cell. A generalization of this theory is presented here to include the effects of a basis set of many atoms per unit cell. The basic equations are modified by the 'density structure factors' f_i, which arise from the density variations within the unit cell. Calculations are presented for the important case of monoatomic liquids freezing into hexagonal close packed solids. It is concluded that all freezing transitions can be described by using structural correlations in the liquid instead of the pair potential; and that the three body correlations are important in deciding the type of solid formed after freezing. (author)

While General Systems Theory (GST) concepts appear to be applicable in explaining some of the phenomena that occur in a Gestalt Therapy group, research is needed to support this assumption. General Systems Theory may not be a group theory per se. Instead, GST may be a theory about groups. A meta-theory exists where its value and usefulness is…

This paper discusses the theory of generalized Bessel functions, which are of noticeable importance in the analysis of scattering processes for which the dipole approximation cannot be used. These functions are introduced in their standard form and in their modified version. The relevant generating functions and Graf-type addition theorems are stated. The usefulness of the results for constructing a fast algorithm for their quantitative computation is also devised. We comment on the possibility of obtaining two-index generalized Bessel functions in, e.g., the study of sum rules of the type Σ_{n=-∞}^{∞} t^n J_n^3(x), where J_n is the cylindrical Bessel function of the first kind. The usefulness of the results for problems of practical interest is finally commented on. It is shown that a modified Anger function can be advantageously introduced to obtain an almost straightforward computation of the Bernstein sum rule in the theory of ion waves.

In evolutionary computing (EC), population size is one of the critical parameters that a researcher has to deal with. Hence, it was no surprise that the pioneers of EC, such as De Jong (1975) and Holland (1975), had already studied population sizing from the very beginning of EC. What is perhaps surprising is that more than three decades later, we still largely depend on experience or ad-hoc trial-and-error approaches to set the population size. For example, in a recent monograph, Eiben and Smith (2003) indicated: "In almost all EC applications, the population size is constant and does not change during the evolutionary search." Despite enormous research on this issue in recent years, we still lack a well-accepted theory for population sizing. In this paper, I propose to develop a population dynamics theory for EC with inspiration from the population dynamics theory of biological populations in nature. Essentially, the EC population is considered a dynamic system over time (generations) and space (search space or fitness landscape), similar to the spatial and temporal dynamics of biological populations in nature. With this conceptual mapping, I propose to 'transplant' the biological population dynamics theory to EC via three steps: (i) experimentally test the feasibility—whether or not emulating natural population dynamics improves EC performance; (ii) comparatively study the underlying mechanisms—why there are improvements, primarily via statistical modeling analysis; (iii) conduct theoretical analysis with theoretical models such as percolation theory and extended evolutionary game theory that are generally applicable to both EC and natural populations. This article is a summary of a series of studies we have performed to achieve the general goal [27][30]-[32]. In the following, I start with an extremely brief introduction to the theory and models of natural population dynamics (Sections 1 & 2). In Sections 4 to 6, I briefly discuss three

In this article, I examined what might be called the evolutionary argument against human uniqueness and human dignity. After having rehearsed briefly the roots of the classical Judeo-Christian view on human uniqueness and human dignity in the first chapters of Genesis, I went on to explore and delineate the nature of the evolutionary argument against this view. Next, I examined whether Christian theology might widen the concept of imago Dei so as to include other beings as well as humans, thus giving up the idea of human uniqueness. I concluded, however, that this move is deeply problematic. Therefore, I turned to a discussion of some recent attempts to define both human uniqueness and the image of God in theological rather than empirical terms. One of these, which is based on the concept of incarnation, is found wanting, but another one is construed in such a way that it enables us to reconcile the idea of human uniqueness as encapsulated in the doctrine of the imago Dei with contemporary evolutionary theory. Thus, this article can be seen as an exercise in bringing classical Christian theology to terms with evolution, further highlighting this theology's ongoing vitality.

Migraine is a very common disorder with a rising incidence. The theory of evolution allows us to explain the emergence of the disorder, owing to the advantages that overreactivity to stimuli provided to ancestral groups of Homo sapiens, and its greater presence in modern societies, based on interactions with external factors. Herein we analyze these points. The design of organisms and their responses to environmental factors emerge to improve survival; thus pain and headache can be regarded as homeostatic and adaptive responses. Fewer than 10% of the population have no experience of headache, and the migrainous phenotype is quite frequent in secondary headaches and in syndromic forms of migraine. These features can be understood on the following grounds: specific neurophysiological data (lack of habituation, sensitization and low preactivation); genetic features (a polygenic disorder involving many genes of low penetrance that interact with the environment and are shared with comorbid disorders such as depression and anxiety); and environmental interactions in modern societies (an increase in the number of estrogenic cycles and, particularly, overexposure to stress). A feature that was once an evolutionary advantage has been transformed into a highly prevalent and disabling disorder in modern societies. It is the result of interaction with internal (estrogenic cycles) and external (stress) stimuli; as a consequence, migraine becomes a mismatch disorder. The effects appear in childhood through epigenetics. Therefore, therapeutic interventions would yield greater benefits if whole populations were included in educative interventions incorporating these aspects.

Evolutionary psychologists are personally liberal, just as social psychologists are. Yet their research has rarely been perceived as liberally biased--if anything, it has been erroneously perceived as motivated by conservative political agendas. Taking a closer look at evolutionary psychologists might offer the broader social psychology community guidance in neutralizing some of the biases Duarte et al. discuss.

This book begins with the fundamentals of the generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, imbedding method, finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of the generalized inverses with an undergraduate-level understanding of linear algebra.
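As a small numerical aside to the topics listed above, the Moore-Penrose generalized inverse of a rank-deficient matrix can be computed and checked against the four Penrose conditions. The sketch below assumes NumPy's `numpy.linalg.pinv` and an arbitrary example matrix; it is an illustration, not material from the book.

```python
import numpy as np

# A rank-deficient matrix has no ordinary inverse, but it always has a
# unique Moore-Penrose generalized inverse A+.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])   # rank 1
Ap = np.linalg.pinv(A)

# The four Penrose conditions characterize A+ uniquely:
assert np.allclose(A @ Ap @ A, A)        # 1) A A+ A  = A
assert np.allclose(Ap @ A @ Ap, Ap)      # 2) A+ A A+ = A+
assert np.allclose((A @ Ap).T, A @ Ap)   # 3) A A+ is Hermitian
assert np.allclose((Ap @ A).T, Ap @ A)   # 4) A+ A is Hermitian
print(Ap.shape)  # (2, 3)
```

Among other things, `A+ b` then gives the minimum-norm least-squares solution of `A x = b`, which is one practical motivation for the determinant and Cramer-rule representations the book studies.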

Cultural evolutionary theory is an interdisciplinary field in which human culture is viewed as a Darwinian process of variation, competition, and inheritance, and the tools, methods, and theories developed by evolutionary biologists to study genetic evolution are adapted to study cultural change. It is argued here that an integration of the theories and findings of mainstream social psychology and of cultural evolutionary theory can be mutually beneficial. Social psychology provides cultural evolution with a set of empirically verified microevolutionary cultural processes, such as conformity, model-based biases, and content biases, that are responsible for specific patterns of cultural change. Cultural evolutionary theory provides social psychology with ultimate explanations for, and an understanding of the population-level consequences of, many social psychological phenomena, such as social learning, conformity, social comparison, and intergroup processes, as well as linking social psychology with other social science disciplines such as cultural anthropology, archaeology, and sociology.

The accumulation of somatic mutations, to which the cellular genome is permanently exposed, often leads to cancer. Analysis of any tumour shows that, besides the malignant cells, one finds other 'supporting' cells such as fibroblasts, immune cells of various types and even blood vessels. Together, these cells generate the microenvironment that enables the malignant cell population to grow and ultimately lead to disease. Therefore, understanding the dynamics of tumour growth and response to therapy is incomplete unless the interactions between the malignant cells and normal cells are investigated in the environment in which they take place. The complex interactions between cells in such an ecosystem result from the exchange of information in the form of cytokine- and adhesion-dependent interactions. Such processes impose costs and benefits on the participating cells that may be conveniently recast in the form of a game pay-off matrix. As a result, tumour progression and dynamics can be described in terms of evolutionary game theory (EGT), which provides a convenient framework in which to capture the frequency-dependent nature of ecosystem dynamics. Here, we provide a tutorial review of the central aspects of EGT, establishing a relation with the problem of cancer. Along the way, we also digress on fitness and ways to compute it. Subsequently, we show how EGT can be applied to the study of the various manifestations and dynamics of multiple myeloma bone disease and its preceding condition known as monoclonal gammopathy of undetermined significance. We translate the complex biochemical signals into costs and benefits of different cell types, thus defining a game pay-off matrix. Then we use the well-known properties of the EGT equations to reduce the number of core parameters that characterize disease evolution. Finally, we provide an interpretation of these core parameters in terms of what their function is in the ecosystem we are describing and generate
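The frequency-dependent reasoning described above can be made concrete for the simplest case of two interacting phenotypes. The pay-off numbers below are hypothetical, chosen only so that malignant cells invade when rare but are disfavoured when common, giving stable coexistence at an interior equilibrium of the replicator dynamics.

```python
def interior_equilibrium(a, b, c, d):
    """Interior fixed point of replicator dynamics for the 2-phenotype game
    [[a, b], [c, d]]: the frequency x of type 1 at which both fitnesses,
    f1 = a*x + b*(1-x) and f2 = c*x + d*(1-x), coincide."""
    return (d - b) / ((a - c) + (d - b))

# Hypothetical pay-offs for malignant (M) vs normal (N) cells:
# M invades when rare (b > d) but loses when common (a < c),
# so the two types stably coexist.
a, b, c, d = 0.3, 1.0, 0.5, 0.6
x_star = interior_equilibrium(a, b, c, d)
print(x_star)  # ≈ 0.667: equilibrium fraction of malignant cells
```

This is the sense in which the tutorial's pay-off matrices "reduce the number of core parameters": for two phenotypes, the long-run composition depends only on the pay-off differences d-b and a-c.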

A previous study of the Kawai, Lewellen and Tye (KLT) relations between gravity and gauge theories, imposed by the relationship of closed and open strings, is here extended in the light of general relativity and Yang-Mills theory as effective field theories. We discuss the possibility of generalizing the traditional KLT mapping in this effective setting. A generalized mapping between the effective Lagrangians of gravity and Yang-Mills theory is presented, and the corresponding operator relations between gauge and gravity theories at the tree level are further explored. From this generalized mapping remarkable diagrammatic relations are found, linking diagrams in gravity and Yang-Mills theory, as well as diagrams in pure effective Yang-Mills theory. Also the possibility of a gravitational coupling to an antisymmetric field in the gravity scattering amplitude is considered, and shown to allow for mixed open-closed string solutions, i.e., closed heterotic strings.

Researchers administered surveys to college zoology students prior to, and immediately following, a study of evolutionary theory, to assess their understanding and acceptance of evidence supporting the theory. Results showed students had many misconceptions about the theory. Their beliefs interfered with their ability to objectively view scientific…

Earth systems increase in complexity, diversity, and interconnectedness with time, driven by tectonic/solar energy that keeps the systems far from equilibrium. The evolution of Earth systems is facilitated by three evolutionary mechanisms: "elaboration," "fractionation," and "self-organization," that share…

The use of general systems theory in the field of instructional systems design (ISD) is explored in this paper. Drawing on work by Young, the writings of 12 representative ISD writers and researchers were surveyed to determine the use of 60 general systems theory concepts by the individual authors. The average number of concepts used by these…

It is shown that the only Kasner-like solution of the Generalized Field Theory field equations with a nonzero electromagnetic field corresponds to an empty field geometry of the space-time. In this case, the electromagnetic field tensors of the theory coincide as could be expected from general considerations. 6 refs. (author)

Three experiments investigated the contrasting predictions of the evolutionary and decision-theoretic approaches to deontic reasoning. Two experiments embedded a hazard management (HM) rule in a social contract scenario that should lead to competition between innate modules. A 3rd experiment used a pure HM task. Threatening material was also…

Failure to understand evolutionary dynamics has been hypothesized as limiting our ability to control biological systems. An increasing awareness of similarities between macroscopic ecosystems and cellular tissues has inspired optimism that game theory will provide insights into the progression and control of cancer. To realize this potential, the ability to compare game theoretic models and experimental measurements of population dynamics should be broadly disseminated. In this tutorial, we present an analysis method that can be used to train parameters in game theoretic dynamics equations, used to validate the resulting equations, and used to make predictions to challenge these equations and to design treatment strategies. The data analysis techniques in this tutorial are adapted from the analysis of reaction kinetics using the method of initial rates taught in undergraduate general chemistry courses. Reliance on computer programming is avoided to encourage the adoption of these methods as routine bench activities.
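The initial-rates idea borrowed from general chemistry can be sketched as follows; the data are synthetic and the function is our own illustration, not the tutorial's code. The slope of a straight-line fit through the first few measurements estimates the instantaneous rate before frequency-dependent curvature sets in.

```python
import numpy as np

def initial_rate(t, x, k=4):
    """Method-of-initial-rates estimate: slope of a straight-line fit
    through the first k measurements, before curvature sets in."""
    slope, _ = np.polyfit(t[:k], x[:k], 1)
    return slope

# Synthetic growth data: x(t) = x0 * exp(r*t) sampled at early times,
# so the initial rate should approximate dx/dt at t=0, i.e. r * x0.
r, x0 = 0.5, 0.1
t = np.linspace(0, 0.4, 5)
x = x0 * np.exp(r * t)
rate = initial_rate(t, x)
print(rate)  # ≈ 0.05 (= r * x0), slightly above due to upward curvature
```

Fitting such initial rates at several starting compositions is what allows the pay-off parameters of the game dynamics equations to be trained from population time series, in close analogy with extracting rate constants in chemical kinetics.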

The purpose of this paper is to contribute to recent efforts to ground evolutionary theory in economics in the principles of Universal Darwinism. The paper contrasts two views of evolution, based on the Ultra-Darwinian and Naturalist theories of biological evolution, both of which are consistent with

Bloch oscillations of electrons are shown to occur in cases where the energy spectrum does not consist of the traditional evenly-spaced ladders and the potential gradient does not result from an external electric field. A theory of such generalized Bloch oscillations is presented. We stipulate that the presented theory of generalized Bloch oscillations can be extended to other systems such as acoustics and photonics.

The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)

This book provides a simple introduction to a nonlinear theory of generalized functions introduced by J.F. Colombeau, which gives a meaning to any multiplication of distributions. This theory extends from pure mathematics (it presents a faithful generalization of the classical theory of C∞ functions and provides a synthesis of most existing multiplications of distributions) to physics (it permits the resolution of ambiguities that appear in products of distributions), passing through the theory of partial differential equations both from the theoretical viewpoint (it furnishes a concept of weak solution of pde's leading to existence-uniqueness results in many cases where no distributional solution exists) and the numerical viewpoint (it introduces new and efficient methods developed recently in elastoplasticity, hydrodynamics and acoustics). This text presents basic concepts and results which until now were only published in article form. It is intended for mathematicians but, since the theory and applicati...

The extended Navier-Stokes theory accounts for the coupling between the translational and rotational molecular degrees of freedom. In this paper, we generalize this theory to non-zero frequencies and wavevectors, which enables a new study of spatio-temporal correlation phenomena present in molecular fluids. To discuss these phenomena in detail, molecular dynamics simulations of molecular chlorine are performed for three different state points. In general, the theory captures the behavior for small wavevector and frequencies as expected. For example, in the hydrodynamic regime and for molecular fluids with small moment of inertia like chlorine, the theory predicts that the longitudinal and transverse intrinsic angular velocity correlation functions are almost identical, which is also seen in the molecular dynamics simulations. However, the theory fails at large wavevector and frequencies...

With the birth of quantum field theory in the late twenties, physicists decided that nature could not be half classical and half quantum, and that the gravitational field ought to be quantized, just as the electromagnetic field had been. One could accept the group of diffeomorphisms as a fundamental characteristic of general relativity (and indeed of all general-relativistic theories), and proceed to construct a quantum field theory adapted to that group. Quantization would be attempted by way of a Hamiltonian formulation of the (classical) theory, and quantum commutation relations would be patterned after the Poisson brackets arising in that formulation. This program is usually called the canonical quantization program, whereas the weak-field approach is known as covariant quantization. The first steps, conceived entirely within the framework of the classical theory, turned out to be beset with technical and conceptual difficulties, which today are essentially resolved. In this paper the author traces out these initial steps.

On-shell methods offer an alternative definition of quantum field theory at tree-level, replacing Feynman diagrams with recursion relations and interaction vertices with a handful of seed scattering amplitudes. In this paper we determine the simplest recursion relations needed to construct a general four-dimensional quantum field theory of massless particles. For this purpose we define a covering space of recursion relations which naturally generalizes all existing constructions, including those of BCFW and Risager. The validity of each recursion relation hinges on the large momentum behavior of an n-point scattering amplitude under an m-line momentum shift, which we determine solely from dimensional analysis, Lorentz invariance, and locality. We show that all amplitudes in a renormalizable theory are 5-line constructible. Amplitudes are 3-line constructible if an external particle carries spin or if the scalars in the theory carry equal charge under a global or gauge symmetry. Remarkably, this implies the 3-line constructibility of all gauge theories with fermions and complex scalars in arbitrary representations, all supersymmetric theories, and the standard model. Moreover, all amplitudes in non-renormalizable theories without derivative interactions are constructible; with derivative interactions, a subset of amplitudes is constructible. We illustrate our results with examples from both renormalizable and non-renormalizable theories. Our study demonstrates both the power and limitations of recursion relations as a self-contained formulation of quantum field theory.

This paper proposes a route choice analytic method that embeds cumulative prospect theory in evolutionary game theory to analyze how drivers adjust their route choice behavior under the influence of traffic information. A simulated network with two alternative routes and one variable message sign is built to illustrate the analytic method. We assume that the drivers in the transportation system are boundedly rational and that the traffic information they receive is incomplete. An evolutionary game model is constructed to describe the evolution of the drivers' route choice decision-making behavior. We conclude that traffic information plays an important role in route choice behavior: the drivers' decision-making process develops toward different evolutionarily stable states in accordance with different transportation situations. The analysis also demonstrates that employing cumulative prospect theory and evolutionary game theory to study drivers' route choice behavior is effective. The method provides academic support and suggestions for traffic guidance systems, and may optimize travel efficiency to a certain extent.
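To make the ingredients concrete, here is a minimal sketch (not the paper's model) of a two-route replicator dynamic whose payoffs pass through a prospect-theory value function. The cost functions, the reference travel time, and the learning rate are all invented for illustration:

```python
import math

def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, loss-averse for losses.
    The exponents and loss-aversion coefficient are the commonly cited
    Tversky-Kahneman estimates, used here purely for illustration."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def route_equilibrium(p=0.5, eta=0.01, steps=5000, reference=30.0):
    """Fraction p of drivers on route 1 under discrete replicator dynamics."""
    for _ in range(steps):
        t1 = 20.0 + 25.0 * p           # route 1 congests with its own load
        t2 = 35.0 + 10.0 * (1.0 - p)   # route 2 congests with its own load
        # Perceived payoff: CPT value of time saved against a reference trip.
        u1 = cpt_value(reference - t1)
        u2 = cpt_value(reference - t2)
        p = min(1.0, max(0.0, p + eta * p * (1.0 - p) * (u1 - u2)))
    return p

p_star = route_equilibrium()
print(f"equilibrium share on route 1: {p_star:.3f}")
```

Because the value function is strictly increasing, the interior fixed point still equalizes the two travel times; with these assumed costs the CPT shape affects the adjustment path and stability rather than the location of the equilibrium itself.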

The general covariance principle in the string field theory is considered. The algebraic properties of the string Lie derivative are discussed. The string vielbein and spin connection are introduced and an action invariant under general co-ordinate transformation is proposed. (author). 18 refs

Evolutionary responses that rescue populations from extinction when drastic environmental changes occur can be friend or foe. The field of conservation biology is concerned with the survival of species in deteriorating global habitats. In medicine, in contrast, infected patients are treated with chemotherapeutic interventions, but drug resistance can compromise eradication of pathogens. These contrasting biological systems and goals have created two quite separate research communities, despite addressing the same central question of whether populations will decline to extinction or be rescued through evolution. We argue that closer integration of the two fields, especially of theoretical understanding, would yield new insights and accelerate progress on these applied problems. Here, we overview and link mathematical modelling approaches in these fields, suggest specific areas with potential for fruitful exchange, and discuss common ideas and issues for empirical testing and prediction.

Only within the last 20 years has it been possible to conduct far-reaching experimental tests of the validity of Einstein's general relativity theory. Experimental confirmation in some fields is hampered by considerable difficulties in applying the theory to cosmic systems, which indicates that such major systems lie at the limit of the theory's applicability. The lecture here reproduced discusses both the successes and the limitations of the theory, starting with its replacement of Newton's absolute space-time theory by the relativistic gravitational postulates of Einstein which, in spite of their greater complexity, nevertheless introduced a great simplicity and comprehensiveness into the overall conception of nature. This theoretical 'beauty', however, can only be trusted if vindicated experimentally, which has to a considerable extent proved to be the case. For weak fields Newtonian and Einsteinian concepts coincide, while for stronger fields, and velocities not far from that of light, Einstein's theory is superior, giving, for example, an excellent correspondence with the precession of the perihelion of Mercury. On a larger scale, however, the theory appears to lead to conclusions which would invalidate the very concepts of space and time, even within a finite time-interval. A more generalized theory seems to be required. (A.D.N.)

This paper establishes an evolutionary mechanism model of the agile supply chain network by means of complex network theory, which can be used to describe the growth process of the network and to analyze its complexity. After discussing the suitability of applying complex network theory to supply chain network research, the paper analyzes the complexity of the agile supply chain network, presents its evolutionary mechanism based on complex network theory, and uses Matlab to simulate degree distribution, average path length, clustering coefficient, and node betweenness. Simulation results show that the evolved network displays the scale-free property. This lays the foundation for further research on agile supply chain networks based on complex network theory.
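As a rough stand-in for the paper's Matlab simulation, the following sketch grows a network by preferential attachment and checks for the heavy-tailed degree distribution that signals the scale-free property. The Barabási-Albert style attachment rule and all sizes are assumptions; the paper's own evolutionary rules are not reproduced here:

```python
import random
from collections import Counter

# Preferential-attachment growth: each new node (enterprise) links to m
# existing nodes with probability proportional to their degree.

def grow_network(n=2000, m=2, seed=7):
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # "targets" repeats each endpoint once per incident edge, so uniform
    # sampling from it is sampling proportional to degree.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges

edges = grow_network()
deg = Counter(v for e in edges for v in e)
median_degree = sorted(deg.values())[len(deg) // 2]
print("max degree:", max(deg.values()))
print("median degree:", median_degree)
```

A maximum degree far above the median is the hallmark of the heavy (power-law) tail; a random graph of the same size and density would show no such hubs.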

Canonical perturbation theory in linearized general relativity is developed. It is shown that the evolution of an arbitrary dynamical variable, governed by the interaction of particles with the gravitational and electromagnetic fields, can be presented as a series whose terms correspond to the contributions of particular spontaneous or induced processes. The main concepts of the approach are presented in the weak-gravitational-field approximation.

Studies have found that, under a wide variety of social circumstances, females are more likely than males to smile. The present article offers a theoretical explanation for this difference based on the premise that testosterone (along with other sex hormones) has evolved the tendency to alter brain functioning in ways that inhibit male smiling, especially during their most reproductively active years. Underlying the theory are the assumptions that (a) females have been naturally selected for preferring to mate with males who have the ability to assist in long-term child rearing primarily by provisioning resources, that (b) males partially accommodate this female preference by competing with rival males who are also vying for resources with which to attract mates, and that (c) male smiling interferes with their ability to most effectively intimidate rivals. If this reasoning is correct, genes must be involved in promoting the tendency to compete for resources, the most likely location for which would be on the Y-chromosome. According to the present theory, these genes operate in part by inhibiting social signals of fear and submissiveness. An additional element of the theory asserts that testosterone alters brain functioning in ways that shift the neocortex away from the left (more "prosocial and friendly") hemisphere toward the right (less "prosocial and friendly") hemisphere. Current evidence bearing on the theory is reviewed and a number of largely untested hypotheses are derived from the theory for future assessment of its predictive power.

In this article, we present an approach for the generalization of adsorption of light gases in porous materials. This new theory goes beyond the Langmuir and Brunauer-Emmett-Teller theories, the standard approaches, whose unphysical assumptions about the number of possible adsorption layers limit their application to crystalline porous materials. The derivation of a more general equation for any crystalline porous framework is presented: the restricted multilayer theory. Our approach allows the determination of gas uptake considering only geometrical constraints of the porous framework and the interaction energy of the guest molecule with the framework. On the basis of this theory, we calculated optimal values for the adsorption enthalpy at different temperatures and pressures. We also present the use of this theory to determine the optimal linker length for a topologically equivalent framework series. We validate this theoretical approach by applying it to metal-organic frameworks (MOFs) and show that it reproduces the experimental results for seven different reported materials. We obtained the universal equation for the optimal linker length, given the topology of a porous framework. This work applied the general equation to MOFs and H2 to create energy-storage materials; however, the theory can be applied to other crystalline porous materials and light gases, which opens the possibility of designing the next generations of energy-storage materials by first considering only the geometrical constraints of the porous materials.
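The restricted-multilayer equation itself is not reproduced in this abstract; for orientation, here is the Langmuir baseline it claims to generalize, which saturates at a single monolayer. The affinity constant and pressure grid are illustrative values, not fitted to any MOF:

```python
# Langmuir isotherm: fractional coverage theta = K*p / (1 + K*p).
# K (1/bar) and the pressures below are illustrative, not material data.

def langmuir_coverage(p, K=0.5):
    """Monolayer fractional coverage at pressure p (bar) with affinity K."""
    return K * p / (1.0 + K * p)

for p in (0.1, 1.0, 10.0, 100.0):
    print(f"p = {p:6.1f} bar -> coverage = {langmuir_coverage(p):.3f}")
```

The saturation at one monolayer is exactly the unphysical restriction the abstract criticizes: real crystalline pores admit a geometry-limited number of layers, which is what the restricted multilayer theory encodes.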

A formal theory is developed for the scattering of time-harmonic electromagnetic waves from impenetrable immobile obstacles with given linear, homogeneous, and generally nonlocal boundary conditions of Leontovich (impedance) type for the wave on the obstacle's surface. The theory is modeled on the complete Green's function and the transition (T) operator in time-independent formal scattering theory of nonrelativistic quantum mechanics. An expression for the differential scattering cross section for plane electromagnetic waves is derived in terms of certain matrix elements of the T operator for the obstacle.

Some possibilities of reconciling general relativity with quantum theory are discussed. The procedure of quantization is certainly not unique, but depends upon the choice of the coordinate conditions. Most versions of quantization predict the existence of gravitons, but it is also possible to formulate a quantum theory with a classical gravity whereby the expectation values of Tsub(μν) constitute the sources of the classical metric field. (author)

In this paper a new theory of generalized continued fractions is constructed and applied to numbers, multidimensional vectors belonging to a real space, and infinite-dimensional vectors with integral coordinates. The theory is based on a concept generalizing the procedure for constructing the classical continued fractions and substantially using ergodic theory. One of the versions of the theory is related to differential equations. In the finite-dimensional case the constructions thus introduced are used to solve problems posed by Weyl in analysis and number theory concerning estimates of trigonometric sums and of the remainder in the distribution law for the fractional parts of the values of a polynomial, and also the problem of characterizing algebraic and transcendental numbers with the use of generalized continued fractions. Infinite-dimensional generalized continued fractions are applied to estimate sums of Legendre symbols and to obtain new results in the classical problem of the distribution of quadratic residues and non-residues modulo a prime. In the course of constructing these continued fractions, an investigation is carried out of the ergodic properties of a class of infinite-dimensional dynamical systems which are also of independent interest
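For orientation, the classical one-dimensional procedure that the paper generalizes can be sketched directly: repeatedly split off the integer part and invert the fractional remainder. The term count is arbitrary, and floating-point error limits how many coefficients are reliable:

```python
import math

# Classical continued-fraction expansion. This is only the baseline
# procedure the paper generalizes, not the generalized construction itself.

def continued_fraction(x, terms=8):
    coeffs = []
    for _ in range(terms):
        a = math.floor(x)
        coeffs.append(a)
        frac = x - a
        if frac == 0:  # x was rational and the expansion terminated
            break
        x = 1.0 / frac
    return coeffs

# The golden ratio has the expansion [1; 1, 1, 1, ...], the slowest-converging
# continued fraction, which is why it is the standard test case.
phi = (1.0 + math.sqrt(5.0)) / 2.0
print(continued_fraction(phi))
```

Quadratic irrationals like the golden ratio have eventually periodic expansions; characterizing algebraic numbers of higher degree is exactly where the paper's multidimensional generalization comes in.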

Bloch oscillations of electrons are shown to occur for cases when the energy spectrum does not consist of the traditional evenly-spaced ladders and the potential gradient does not result from an external electric field. A theory of such generalized Bloch oscillations is presented and an exact calculation is given to confirm this phenomenon. Our results allow for a greater freedom of design for experimentally observing Bloch oscillations. For strongly coupled oscillator systems displaying Bloch oscillations, it is further demonstrated that reordering of oscillators leads to destruction of Bloch oscillations. We stipulate that the presented theory of generalized Bloch oscillations can be extended to other systems such as acoustics and photonics. (paper)

Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry, and geometry over the field with one element. It also permits the construction of important Arakelov-theoretic objects, such as the completion \Spec Z of Spec Z. In this thesis, we prove a projective bundle theorem for the field with one element and compute the Chow rings of the generalized schemes \Spec Z^N appearing in the construction of \Spec Z.

Recent upgrades to cognitive load theory suggest that evolutionary processes have shaped the way that working memory processes cultural and social information. According to evolutionary educational psychologists, some forms of information are processed with lower working memory loads than other forms. The former are evolutionarily salient and…

We investigate the generalized second law of thermodynamics (GSL) in generalized theories of gravity. We examine the total entropy evolution with time including the horizon entropy, the non-equilibrium entropy production, and the entropy of all matter, field and energy components. We derive a universal condition to protect the generalized second law and study its validity in different gravity theories. In Einstein gravity (even in the phantom-dominated universe with a Schwarzschild black hole), Lovelock gravity and braneworld gravity, we show that the condition to keep the GSL can always be satisfied. In f(R) gravity and scalar-tensor gravity, the condition to protect the GSL can also hold because the temperature should be positive, gravity is always attractive and the effective Newton constant should be an approximate constant satisfying the experimental bounds

In semilocal theories, the vacuum manifold is fibered in a non-trivial way by the action of the gauge group. Here we generalize the original semilocal theory (which was based on the Hopf bundle S^3 → S^2 with fiber S^1) to realize the next Hopf bundle S^7 → S^4 with fiber S^3, and its extensions S^{2n+1} → HP^n with fiber S^3. The semilocal defects in this class of theories are classified by π_3(S^3), and are interpreted as constrained instantons or generalized sphaleron configurations. We fail to find a field-theoretic realization of the final Hopf bundle S^15 → S^8 with fiber S^7, but are able to construct other semilocal spaces realizing Stiefel bundles over Grassmannian spaces. (orig.)

Based on their research experience, the authors propose a reference textbook in two volumes on the theory of generalized locally Toeplitz sequences and their applications. This first volume focuses on the univariate version of the theory and the related applications in the unidimensional setting, while the second volume, which addresses the multivariate case, is mainly devoted to concrete PDE applications. This book systematically develops the theory of generalized locally Toeplitz (GLT) sequences and presents some of its main applications, with a particular focus on the numerical discretization of differential equations (DEs). It is the first book to address the relatively new field of GLT sequences, which occur in numerous scientific applications and are especially dominant in the context of DE discretizations. Written for applied mathematicians, engineers, physicists, and scientists who (perhaps unknowingly) encounter GLT sequences in their research, it is also of interest to those working in the fields of...

Starting from Poincare's Lorentz-invariant theory of gravity formulated in 1906, development of Einstein's general theory of relativity during 1906-1916 is discussed. Three stages in this development are recognised. In the first stage during 1907-1914, Einstein tried to extend the relativity principle of uniform motion to the frames in non-uniform motion. For this purpose, he introduced the principle of equivalence which made it possible to calculate the effect of homogeneous gravitational field on arbitrary physical processes. During the second stage comprising years 1912-1914 overlapping the first stage, Einstein and Grossmann were struggling to translate physical postulates into the language of the absolute differential calculus. In the period 1915-1916, Einstein formulated the field equations of general relativity. While discussing these developmental stages, theories of gravitation formulated by Abraham, Nordstroem and Mie are also discussed. (M.G.B.)

The study of evolutionary theory and fieldwork in animal behavior is enriched when students leave the classroom so they may test their abilities to think and act like scientists. This article describes a course on evolutionary theory and animal behavior that blended on-campus learning with field experience in the United States and in Ecuador and…

Discusses the trend in academic debate on policy questions toward a wide acceptance of counterplans, encouraging combinations of proposals which appear at face value able to coexist but upon deeper analysis are incompatible. Argues in opposition to this trend by applying concepts from general systems theory to competition. (KEH)

This chapter discusses General Systems Theory as it applies to education, classrooms, innovations, and instructional design. The principles of equifinality, open and closed systems, the individual as the key system, hierarchical structures, optimization, stability, cooperation, and competition are discussed, and their relationship to instructional…

Describes basic concepts in the field of general systems theory (GST) and identifies commonalities that exist between GST and instructional systems design (ISD). Models and diagrams that depict system elements in ISD are presented, and two matrices that show how GST has been used in ISD literature are included. (11 references) (LRW)

This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. At zero temperature and zero frequency...

This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse...

If in Gender Trouble (1990) Butler presented a proposal of the theory of performativity of speech acts applied to the construction of gender, in her latest book, Notes Toward a Performative Theory of Assembly (2015), she articulates a theory of performativity applied to the collective and concerted action of minorities or populations deemed "disposable". The interest of the proposal presented in this paper is to analyze how the theory of performativity of gender is now extended to forms of democratic action, going from a structure that explains the possibilities of gender to one that explains the possibilities for a livable life. This is what we call the extension of performativity, from the special case of gender to the general case of a livable life.

The background underlying the η-deformed AdS_5 × S^5 sigma-model is known to satisfy a generalization of the IIB supergravity equations. Their solutions are related by T-duality to solutions of type IIA supergravity with non-isometric linear dilaton. We show how the generalized IIB supergravity equations can be naturally obtained from exceptional field theory. Within this manifestly duality covariant formulation of maximal supergravity, the generalized IIB supergravity equations emerge upon imposing on the fields a simple Scherk-Schwarz ansatz which respects the section constraint.

While comprehensive reviews of the literature, by gathering in one place most of the relevant information, undoubtedly steer the development of every scientific field, we found that the comments in response to a review article can be as informative as the review itself, if not more. Namely, reading through the comments on the ideas expressed in Ref. [1], we could identify a number of pressing problems for evolutionary game theory, indicating just how much space there still is for major advances and breakthroughs. In an attempt to bring a sense of order to a multitude of opinions, we roughly classified the comments into three categories, i.e. those concerned with: (i) the universality of scaling in heterogeneous topologies, including empirical dynamic networks [2-8], (ii) the universality of scaling for more general game setups, such as the inclusion of multiple strategies and external features [4,9-11], and (iii) experimental confirmations of the theoretical developments [2,12,13].

Time has proved that economic analysis alone cannot meet all the needs of the economic field. The present study proposes a new method for approaching economic phenomena and processes, based on research done outside the economic space: a new general interpretation theory centered on the human being as the basic actor of the economy. A general interpretation theory must provide: interpretation of the causalities among economic phenomena and processes (causal interpretation); interpretation of the correlations and dependencies among indicators (normative interpretation); interpretation of social and communicational processes in economic organizations (social and communicational interpretation); interpretation of the community status of companies (transsocial interpretation); interpretation of the purposes of human activities and their coherence (teleological interpretation); and interpretation of equilibrium/disequilibrium within economic systems (optimality interpretation). To meet such demands, rigor, pragmatism, praxiology and contextual connectors are required. To progress, economic science must improve its language, both its syntax and its semantics. Clarity of exposition requires clarity of language, and the progress of scientific theory requires hypotheses in the building of theories. The switch from common language to symbolic language is a switch from ambiguity to rigor and rationality, that is, to order in thinking. But order implies structure, which implies formalization. Our paper is a plea for these requirements, which a modern interpretation theory should fulfill.

After completing the final version of his general theory of relativity in November 1915, Albert Einstein wrote a book about relativity for a popular audience. His intention was "to give an exact insight into the theory of relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics." The book remains one of the most lucid explanations of the special and general theories ever written. In the early 1920s alone, it was translated into ten languages, and fifteen editions in the original German appeared over the course of Einstein's lifetime. This new edition of Einstein's celebrated book features an authoritative English translation of the text along with an introduction and a reading companion by Hanoch Gutfreund and Jürgen Renn that examines the evolution of Einstein's thinking and casts his ideas in a broader present-day context. A special chapter explores the history...

The explanation of marital satisfaction and stability in the trajectories of couple relationships has been a central interest of different studies (Karney, Bradbury, & Johnson, 1999; Sabatelli & Ripoll, 2004; Schoebi, Karney, & Bradbury, 2012). However, several questions and unknown aspects still surround the topic. Within this context, the present reflection seeks to analyze whether the principles of evolutionary theory suffice to explain three marital trajectories in terms of satisfaction and stability. With this in mind, we have included other explanations proposed by psychosocial theory, which evolutionary theory does not address, in order to better understand mating behavior. Moreover, other factors that could account for satisfied and stable relationships were analyzed. Suggestions for future investigations include the analysis of other marital trajectories that may or may not end in separation or divorce but are not included in this article.

A general analytical theory of the five main satellites of Uranus, including the secular and short period terms hereafter denoted by GUST, is presented. A comparison is made with an internal numerical integration with nominal masses of Veillet (1983). The precision of the theory goes from about 10 km for Miranda to 100 km for Oberon. The short period terms in the motions of Titania and Oberon are larger than 500 km. They should make possible the determination of the masses of the outer satellites through the optical data of Voyager encounter.

Psychiatry faces an internal contradiction in that it regards mild sadness and low mood as normal emotions, yet when these emotions are directed toward a new infant, it regards them as abnormal. We apply parental investment theory, a widely used framework from evolutionary biology, to maternal perinatal emotions, arguing that negative emotions directed toward a new infant could serve an important evolved function. If so, then under some definitions of psychiatric disorder, these emotions are not disorders. We investigate the applicability of parental investment theory to maternal postpartum emotions among Shuar mothers. Shuar mothers' conceptions of perinatal sadness closely match predictions of parental investment theory.

Good cooperation mechanisms are an important guarantee for the advancement of industrialized construction. To strengthen the partnership between producers, we analyze the behavioral evolution trend of both parties using evolutionary game theory. Based on the original model, the mechanism of coordination and cooperation between prefabricated-component producers is explained under conditions of punishment and incentive. The results indicate that stable evolutionary strategies exist under both cooperation and noncooperation, and that the evolutionary outcome is influenced by the initial proportions in both decision-making processes. The government can support production enterprises in establishing a solid partnership through effective punishment and incentive mechanisms that reduce initial costs in the prefabricated construction supply chain, resulting in a win-win situation.
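A minimal sketch of this punishment-and-incentive logic (all payoff numbers, the fine, and the subsidy are invented for illustration, not taken from the paper): starting from a prisoner's dilemma where defection dominates, a government fine on defection plus a cooperation subsidy flips the replicator dynamics from all-defect to all-cooperate.

```python
# Discrete replicator dynamics for a symmetric two-strategy game.
# payoff[(a, b)] = payoff to a player using strategy a against strategy b.

def replicator_fixed_point(payoff, p=0.5, eta=0.05, steps=5000):
    """Long-run fraction of cooperators, starting from fraction p."""
    for _ in range(steps):
        uC = p * payoff[('C', 'C')] + (1 - p) * payoff[('C', 'D')]
        uD = p * payoff[('D', 'C')] + (1 - p) * payoff[('D', 'D')]
        p = min(1.0, max(0.0, p + eta * p * (1 - p) * (uC - uD)))
    return p

# Illustrative prisoner's dilemma payoffs (not from the paper).
base = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
fine, subsidy = 3, 1
# Government mechanism: subsidize every cooperative act, fine every defection.
governed = {k: v + (subsidy if k[0] == 'C' else -fine) for k, v in base.items()}

print("no mechanism:", round(replicator_fixed_point(base), 2))
print("fine + subsidy:", round(replicator_fixed_point(governed), 2))
```

With these numbers the governed game makes cooperation strictly dominant, so the outcome no longer depends on the initial proportion; smaller fines would instead create the bistable situation the abstract describes, where the initial proportions decide which stable state is reached.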

We continue the study of Lagrangian descriptions of N=2 Argyres-Douglas theories. We use our recent interpretation in terms of sequential confinement to guess the Lagrangians of all the Argyres-Douglas models with Abelian three-dimensional mirror. We find classes of four-dimensional N=1 quivers that flow in the infrared to generalized Argyres-Douglas theories, such as the (A_k, A_{kN+N-1}) models. We study in detail how the N=1 chiral rings map to the Coulomb and Higgs branches of the N=2 CFTs. The three-dimensional mirror RG flows are shown to land on the N=4 complete-graph quivers. We also compactify to three dimensions the gauge theory dual to (A_1, D_4), and find the expected Abelianization duality with N=4 SQED with 3 flavors.

Understanding and evaluating the implementation of complex interventions in practice is an important problem for healthcare managers and policy makers, and for patients and others who must operationalize them beyond formal clinical settings. It has been argued that this work should be founded on theory that provides a foundation for understanding, designing, predicting, and evaluating dynamic implementation processes. This paper sets out the core constituents of a general theory of implementation, building on Normalization Process Theory and linking it to key constructs from recent work in sociology and psychology. These are informed by ideas about agency and its expression within social systems and fields, social and cognitive mechanisms, and collective action. This approach unites a number of contending perspectives in a way that makes possible a more comprehensive explanation of the implementation and embedding of new ways of thinking, enacting, and organizing practice. PMID:23406398

In my lecture I describe the present stage of the general theory of quantized fields through the example of five subjects, ordered in the direction from large to small distances. The first is the by now classical problem of the structure of superselection sectors. It involves the behavior of the theory at spacelike infinity and is directly connected with particle statistics and internal symmetries. It has become popular in recent years through the discovery of many nontrivial models in 2d conformal field theory, through connections to integrable models and critical behavior in statistical mechanics, and through the relations to Jones' theory of subfactors in von Neumann algebras and to the corresponding geometrical objects (braids, knots, 3d manifolds, ...). At large timelike distances the by far most important feature of quantum field theory is the particle structure; this is the second subject of my lecture. Then follows the technically most involved part, which is concerned with the behavior at finite distances. Two aspects, nuclearity, which emphasizes the finite density of states in phase space, and the modular structure, which relies on the infinite number of degrees of freedom present even locally, as well as their mutual relations, will be treated. The next point, involving the structure at infinitesimal distances, is the connection between the Haag-Kastler framework of algebras of local observables and the framework of Wightman fields. Finally, problems in approaches to quantum gravity will be discussed, as far as they are accessible by the methods of the general theory of quantized fields. (orig.)

This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse conductivity can be explicitly computed and coincides with the classical result. In the general case, using magnetic perturbation theory, the conductivity tensor is expanded in powers of the strength of the magnetic field $B$. Then the linear term in $B$ of this expansion is written down in terms of the zero magnetic field Green function and the zero field current operator. In the periodic case, the linear term in $B$ of the conductivity tensor is expressed in terms of zero magnetic field Bloch functions and energies. No derivatives with respect to the quasimomentum appear and thereby all ambiguities are removed, in contrast to earlier work.

We study flat FRW cosmological solutions generated in general massive gravity theories. Such models are obtained by adding to the Einstein General Relativity action peculiar non-derivative potentials, functions of the metric components, that induce the propagation of five gravitational degrees of freedom. This large class of theories includes both the case with a residual Lorentz invariance and the case with rotational invariance only. It turns out that the Lorentz-breaking case is selected as the only possibility. Moreover, perturbations around strict Minkowski or dS space turn out to be strongly coupled. The upshot is that even though dark energy can be simply accounted for by massive gravity modifications, its equation of state w_eff has to deviate from -1. Indeed, there is an explicit relation between the strong coupling scale of perturbations and the deviation of w_eff from -1. Taking into account current limits on w_eff and submillimeter tests of Newton's law as a limit on the possible strong coupling scale, we find that it is still possible to have a weakly coupled theory in a quasi-dS background. Future experimental improvements on short-distance tests of Newton's law may be used to tighten the deviation of w_eff from -1 in a weakly coupled massive gravity theory.

The theory of Sturmians and generalized Sturmians is reviewed. It is shown that when generalized Sturmians are used as basis functions, calculations on the spectra and physical properties of few-electron atoms can be performed with great ease and good accuracy. The use of many-center Coulomb Sturmians as basis functions in calculations on N-electron molecules is also discussed. Basis sets of this type are shown to have many advantages over other types of ETOs, especially the property of automatic scaling.

A distinctive feature of the heuristically based generalized perturbation theory methodology is its systematic use of importance conservation concepts. As is well known, this use leads to a fundamental reciprocity relationship. The alternative variational and differential approaches, instead, make consistent use of the properties of adjoint functions. The equivalence between the importance and the adjoint functions has been demonstrated in important cases. There are some instances, however, in which the commonly known operators governing the adjoint functions are not adequate. In this paper, ways proposed to generalize these rules, as adopted within the heuristic generalized perturbation theory methodology, are illustrated. When applied to the neutron/nuclide field characterizing core evolution in a power reactor system, in which an intensive control variable (ρ) is also defined, these rules lead to an orthogonality relationship connected with this same control variable. A set of ρ-mode eigenfunctions may be correspondingly defined, and an extended concept of reactivity (generalizing that commonly associated with the multiplication factor) is proposed as more directly indicative of the controllability of a critical reactor system. (author). 25 refs

Convective vortices are common features of atmospheres that absorb low-entropy energy at higher temperatures than those at which they reject high-entropy energy to space. These vortices range from small to large scales and play an important role in the vertical transport of heat, momentum, and tracer species. Thus, the development of theoretical models for convective vortices is important to our understanding of some of the basic features of planetary atmospheres. The heat-engine framework is a useful tool for studying convective vortices. However, current theories assume that convective vortices are reversible heat engines. Since there are questions about how reversible real atmospheric heat engines are, their usefulness for studying real atmospheric vortices is somewhat controversial. To address this problem, a theory for convective vortices that includes irreversible processes is proposed. The paper's main result is an expression for the pressure drop along streamlines that includes the effects of irreversible processes. It is shown that a simplified version of this expression is a generalization of Bernoulli's equation to convective circulations. It is speculated that the proposed theory not only explains the intensity of convective vortices but also sheds light on other basic features such as their physical appearance.

A generalized Yang-Mills theory, the non-Abelian version of the generalized electrodynamics proposed by Podolsky, is analysed in both the Lagrangian and Hamiltonian formulations. A simple class of solutions to the Euler-Lagrange equations is presented, and the structure of the Hamiltonian constraints is studied in detail. (Author)

We extend the theory of weak gravitational lensing to cosmologies with generalized gravity, described in the Lagrangian by a generic function depending on the Ricci scalar and a nonminimally coupled scalar field. We work out the generalized Poisson equations relating the dynamics of the fluctuating components to the two gauge-invariant scalar gravitational potentials, fixing the contributions from the modified background expansion and fluctuations. We show how the lensing equation gets modified by the cosmic expansion as well as by the presence of anisotropic stress, which is non-null at the linear level both in scalar-tensor gravity and in theories where the gravitational Lagrangian term features a nonminimal dependence on the Ricci scalar. Starting from the geodesic deviation, we derive the generalized expressions for the shear tensor and projected lensing potential, encoding the spacetime variation of the effective gravitational constant and isolating the contribution of the anisotropic stress, which introduces a correction due to the spatial correlation between the gravitational potentials. Finally, we work out the expressions of the lensing convergence power spectrum as well as the correlation between the lensing potential and the integrated Sachs-Wolfe effect affecting cosmic microwave background total intensity and polarization anisotropies. To illustrate the effects phenomenologically, we work out approximate expressions for the quantities above in extended quintessence scenarios where the scalar field coupled to gravity plays the role of the dark energy.

The main focus of this monograph is to offer a comprehensive presentation of known and new results on various generalizations of CS-modules and CS-rings. Extending (or CS) modules are generalizations of injective (and also semisimple or uniform) modules. While the theory of CS-modules is well documented in monographs and textbooks, results on generalized forms of the CS property as well as dual notions are far less present in the literature. With their work the authors provide a solid background to module theory, accessible to anyone familiar with basic abstract algebra. The focus of the book is on direct sums of CS-modules and classes of modules related to CS-modules, such as relative (injective) ejective modules, (quasi) continuous modules, and lifting modules. In particular, matrix CS-rings are studied and clear proofs of fundamental decomposition results on CS-modules over commutative domains are given, thus complementing existing monographs in this area. Open problems round out the work and establish the...

The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the “hybrid equilibrium,” to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith’s Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory. PMID:26348617

The evolution of socio-economic systems depends on the interdependent decision processes of their underlying components. The mathematical framework for describing the strategic decisions of players within a socio-economic game is ''game theory''. ''Quantum game theory'' is a mathematical and conceptual amplification of classical game theory: the space of all conceivable decision paths is extended from the purely rational, measurable space to the Hilbert space of complex numbers, the mathematical space in which quantum theory is formulated. Through the concept of a potential entanglement of the imaginary quantum strategy parts, it is possible to include cooperative decision paths caused by cultural or moral standards. If this strategic entanglement is large enough, additional Nash equilibria can occur, previously dominant strategies can become nonexistent, and new evolutionarily stable strategies appear for some game classes. Within this PhD thesis the main results of classical and quantum games are summarized, and all possible game classes of evolutionary (2-player)-(2-strategy) games are extended to quantum games. It is shown that the quantum extension of classical games with an underlying dilemma-like structure gives different results if the strength of strategic entanglement is above a certain barrier. After the German summary and the introduction, five different applications of the theory are discussed within the thesis. (orig.)
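The entanglement-dependent payoff shift described above can be sketched numerically. The following is a minimal sketch of an EWL-style quantized 2x2 game; conventions for the entangling gate and the strategy operators vary across the literature, so the gate chosen here and the Prisoner's Dilemma payoffs (3, 0, 5, 1) are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def ewl_payoff(U_A, U_B, gamma, payoffs_A=(3, 0, 5, 1)):
    """Expected payoff of player A in an EWL-style quantized 2x2 game.

    J entangles the initial |CC> state with strength gamma; basis order is
    |CC>, |CD>, |DC>, |DD> with Prisoner's Dilemma payoffs (R, S, T, P).
    """
    # Entangling gate J = exp(i * gamma/2 * X(x)X), written out explicitly.
    J = np.cos(gamma / 2) * np.kron(I2, I2) + 1j * np.sin(gamma / 2) * np.kron(X, X)
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                                   # start in |CC>
    psi = J.conj().T @ np.kron(U_A, U_B) @ J @ psi0  # entangle, play, disentangle
    probs = np.abs(psi) ** 2                        # outcome probabilities
    return float(probs @ np.array(payoffs_A))
```

As a sanity check of the construction, if both players apply the identity ("cooperate"), the entangler is undone by its inverse and the payoff is the classical reward R, for any value of gamma.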

By solving the coupled system of kinetic equations for an interacting system of electrons, positrons (holes), and photons (phonons) at high external electric fields, arbitrary magnetic fields, and under the propagation of strong electromagnetic waves, the non-equilibrium and non-stationary distribution functions of photons (phonons) and charge carriers were found, taking into account arbitrary heating and the mutual drag of carriers and photons (phonons). The author is convinced that the distribution function of photons (phonons) he obtained in 1976 must lie at the basis of the theoretical physics of the 21st century, just as the equilibrium Planck distribution function of black-body radiation, obtained in 1900, lay at the basis of the quantum physics of the 20th century. The author's many years of work (from 1976 until today) confirmed the rightness of the chosen way and led to the conclusion that kinetic theory is the more general and fundamental theory of nature, unifying non-stationary dynamics (the left-hand side) with non-stationary statistical mechanics (the right-hand side) of the kinetic equation. It is shown that other branches of theoretical physics, such as Newtonian, Hamiltonian, and relativistic classical mechanics, quantum physics, optics, statistical mechanics, thermodynamics, and particle physics, may be derived from kinetic theory under special conditions and are special cases of this theory. Problems such as irreversibility and instability, the paradox of time, the quantum paradox, and others are solved. This new general theory explains all the problems and troubles connected with the foundations and interpretation of quantum mechanics and relativity. The mechanism of quantization and of transitions from one energy level to another, the squeezed effect, and the transition of particle wave-packets through energy barriers were found. The possibility of superluminal motion of light pulses and wave-packets through media and photonic barriers is shown. It is well known that the experiments

Although intentionality has been implicated as a causal variable in healing research, its definition has been inconsistent and vague. The objective of this grounded theory study is to develop a substantive theory of intentionality in a naturalistic encounter between nurse-healers and their healee-clients, and to consider the implications for practice and research. Six expert nurse-healers and six healee-clients were interviewed as individuals and in dyads before and after treatments. Interviews and observational data were analyzed using the constant comparative method and synthesized analysis. Participants described their experience of intentionality in healing as an evolutionary process characterized by distinctive shifts. The theory of intentionality: the matrix for healing (IMH) includes definitions of intentionality and a conceptual framework of three developmental phases of intentionality (generic, healing, and transforming intentionalities). The predominant attribute, development, is described. The theory contributes to knowledge about healing and intentionality and has implications for practice and future research.

The geometric form of standard quantum mechanics is compatible with two postulates: (1) the laws of physics are invariant under the choice of experimental setup, and (2) every quantum observation or event is intrinsically statistical. These postulates remain compatible within a background-independent extension of quantum theory with a local intrinsic time, implying the relativity of the concept of a quantum event. In this extension the space of quantum events becomes dynamical, and only individual quantum events make sense observationally. At the core of such a general theory of quantum relativity is the three-way interplay between the symplectic form, the dynamical metric and the non-integrable almost complex structure of the space of quantum events. Such a formulation provides a missing conceptual ingredient in the search for a background-independent quantum theory of gravity and matter. The crucial new technical element in our scheme derives from a set of recent mathematical results on certain infinite-dimensional almost Kähler manifolds, which replace the complex projective spaces of standard quantum mechanics.

We study a holographic theory of general spacetimes that does not rely on the existence of asymptotic regions. This theory is to be formulated in a holographic space. When a semiclassical description is applicable, the holographic space is assumed to be a holographic screen: a codimension-1 surface that is capable of encoding states of the gravitational spacetime. Our analysis is guided by conjectured relationships between gravitational spacetime and quantum entanglement in the holographic description. To understand basic features of this picture, we catalog predictions for the holographic entanglement structure of cosmological spacetimes. We find that qualitative features of holographic entanglement entropies for such spacetimes differ from those in AdS/CFT but that the former reduce to the latter in the appropriate limit. The Hilbert space of the theory is analyzed, and two plausible structures are found: a direct-sum and "spacetime-equals-entanglement" structure. The former preserves a naive relationship between linear operators and observable quantities, while the latter respects a more direct connection between holographic entanglement and spacetime. We also discuss the issue of selecting a state in quantum gravity, in particular how the state of the multiverse may be selected in the landscape.

Evolutionary game theory is a mathematical approach to studying how social behaviors evolve. In many recent works, evolutionary competition between strategies is modeled as a stochastic process in a finite population. In this context, two limits are both mathematically convenient and biologically relevant: weak selection and large population size. These limits can be combined in different ways, leading to potentially different results. We consider two orderings: the [Formula: see text] limit, in which weak selection is applied before the large population limit, and the [Formula: see text] limit, in which the order is reversed. Formal mathematical definitions of the [Formula: see text] and [Formula: see text] limits are provided. Applying these definitions to the Moran process of evolutionary game theory, we obtain asymptotic expressions for fixation probability and conditions for success in these limits. We find that the asymptotic expressions for fixation probability, and the conditions for a strategy to be favored over a neutral mutation, are different in the [Formula: see text] and [Formula: see text] limits. However, the ordering of limits does not affect the conditions for one strategy to be favored over another.
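The fixation probability at the center of the analysis above can be computed exactly for finite N. The following is a minimal sketch for the frequency-dependent Moran process, assuming the common linear payoff-to-fitness map f = 1 + δ·π; the paper's exact weak-selection conventions and limit orderings may differ.

```python
def moran_fixation_probability(a, b, c, d, N, delta):
    """Fixation probability of a single A-mutant among N-1 B-players.

    Game payoffs: a = A vs A, b = A vs B, c = B vs A, d = B vs B, with
    self-interaction excluded; fitness is f = 1 + delta * payoff.
    Uses rho = 1 / (1 + sum_k prod_{j<=k} gamma_j), gamma_j = T-(j)/T+(j).
    """
    total, prod = 1.0, 1.0
    for j in range(1, N):                 # j = current number of A-players
        pi_A = (a * (j - 1) + b * (N - j)) / (N - 1)   # average A payoff
        pi_B = (c * j + d * (N - j - 1)) / (N - 1)     # average B payoff
        gamma = (1 + delta * pi_B) / (1 + delta * pi_A)
        prod *= gamma
        total += prod
    return 1.0 / total
```

A quick consistency check: a neutral mutant (delta = 0) fixes with probability 1/N, and a strategy with a uniform payoff advantage fixes with probability above 1/N.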

This article points out the reasons for and possibilities of co-evolutionary studies, the requirements and problems in developing such studies, and discusses some of the central theoretical frameworks in organization theory from the perspective of coevolution (Lewin and Volberda, 1999). From this, we identify the possible links that can be established between different study lenses when they are integrated into a co-evolutionary study. Such links are drawn from the analysis of institutional theory (DiMaggio and Powell, 1991; Meyer and Rowan, 1991; Scott, 1995), transaction cost theory (Williamson, 1981), and the theory of social relations in economic action (Granovetter, 1992). The article thus aims to contribute to the discussion about the possibilities for greater integration in organizational studies by calling attention to the importance of moving toward a more inclusive perspective, taking into account macro economic and social dynamics and their impact at the level of the firm (in terms of size, identity, culture, and learning processes) and the inverse relationships, from the firm to the macro environment.

Highlights:
• We investigate the evolutionary dynamics of human cooperation in a social network.
• We introduce the concepts of "Critical Mass", centrality measure and homophily.
• The emergence of cooperation is affected by the spatial choice of the "Critical Mass".
• Our findings show that homophily speeds up the convergence towards cooperation.
• Centrality and "Critical Mass" spatial choice partially offset the impact of homophily.

Abstract: As natural systems continuously evolve, the human cooperation dilemma represents an increasingly challenging question. Humans cooperate in natural and social systems, but how this happens, and what mechanisms rule the emergence of cooperation, remain an open and fascinating issue. In this work, we investigate the evolution of cooperation through analysis of the evolutionary dynamics of behaviours within a social network, where nodes can choose to cooperate or defect following the classical social dilemmas represented by the Prisoner's Dilemma and Snowdrift games. To this aim, we introduce a sociological concept and statistical estimator, the "Critical Mass", to detect the minimum initial seed of cooperators able to trigger the diffusion process, and a centrality measure to select it within the social network. By selecting different spatial configurations of the Critical Mass nodes, we highlight how the emergence of cooperation can be influenced by the spatial choice of the initial core in the network. Moreover, we aim to shed light on how the concept of homophily, a social shaping factor by which "birds of a feather flock together", can affect the evolutionary process. Our findings show that homophily speeds up the diffusion process and makes the convergence towards human cooperation quicker, while the centrality measure, and thus the Critical Mass selection, plays a key role in the evolution, showing how spatial configurations can create some hidden patterns, partially
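The effect of the seed's spatial configuration can be illustrated with a toy model; this is a minimal sketch, not the paper's network or update rule. It runs imitate-the-best dynamics for a weak Prisoner's Dilemma on a ring and compares a contiguous seed block (a crude stand-in for a "Critical Mass") with the same number of scattered seed cooperators.

```python
def simulate(n=60, seed_nodes=(), rounds=30, b=1.4):
    """Imitate-the-best dynamics for a weak Prisoner's Dilemma
    (R = 1, T = b, S = P = 0, Nowak-May convention) on a ring of n nodes.
    Returns the final fraction of cooperators."""
    coop = [i in set(seed_nodes) for i in range(n)]
    def neighbours(i):
        return ((i - 1) % n, (i + 1) % n)
    for _ in range(rounds):
        pay = []
        for i in range(n):
            p = 0.0
            for j in neighbours(i):
                if coop[j]:                      # payoff only vs cooperators
                    p += 1.0 if coop[i] else b
            pay.append(p)
        # Each node adopts the strategy of its best-scoring neighbour,
        # keeping its own when it scores at least as well.
        coop = [coop[max((i,) + neighbours(i), key=lambda k: pay[k])]
                for i in range(n)]
    return sum(coop) / n

# A contiguous seed block versus the same number of scattered seeds:
block = simulate(seed_nodes=range(12))
scattered = simulate(seed_nodes=range(0, 60, 5))
```

In this toy setting the contiguous block of cooperators persists while the scattered cooperators are all invaded, echoing the paper's point that the spatial configuration of the initial core matters.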

A general theory of the onset and development of the plasmoid instability is formulated by means of a principle of least time. We derive the scaling relations for the final aspect ratio, transition time to rapid onset, growth rate, and number of plasmoids, which depend on the initial perturbation amplitude (ŵ_0), the characteristic rate of current sheet evolution (1/τ), and the Lundquist number (S). They are not simple power laws, but are proportional to S^α τ^β [ln f(S, τ, ŵ_0)]^σ. Finally, the detailed dynamics of the instability is elucidated and shown to comprise a period of quiescence followed by sudden growth over a short time scale.

We develop a general theory for interferometry by correlation that (i) properly accounts for heterogeneously distributed sources of continuous or transient nature, (ii) fully incorporates any type of linear and nonlinear processing, such as one-bit normalization, spectral whitening and phase-weighted stacking, (iii) operates for any type of medium, including 3-D elastic, heterogeneous and attenuating media, (iv) enables the exploitation of complete correlation waveforms, including seemingly unphysical arrivals, and (v) unifies the earthquake-based two-station method and ambient noise correlations. Our central theme is not to equate interferometry with Green function retrieval, but to extract information directly from processed interstation correlations, regardless of their relation to the Green function. We demonstrate that processing transforms the actual wavefield sources and actual wave propagation physics into effective sources and effective wave propagation. This transformation is uniquely determined by the processing applied to the observed data, and can be easily computed. The effective forward model, which links effective sources and propagation to synthetic interstation correlations, may not be perfect. A forward modelling error, induced by processing, describes the extent to which processed correlations can actually be interpreted as proper correlations, that is, as resulting from some effective source and some effective wave propagation. The magnitude of the forward modelling error is controlled by the processing scheme and the temporal variability of the sources. Applying adjoint techniques to the effective forward model, we derive finite-frequency Fréchet kernels for the sources of the wavefield and Earth structure, which should be inverted jointly. The structure kernels depend on the sources of the wavefield and the processing scheme applied to the raw data. Therefore, both must be taken into account correctly in order to make accurate inferences on

Biosocial theory claims that evolution did not design human psychological sex differences. It argues that these are the result of the allocation of men and women into different sex roles, based on physical differences. This article argues, however, that biosocial theory is not an alternative to

In social groups where relatedness among interacting individuals is low, cooperation can often only be maintained through mechanisms that repress competition among group members. Repression-of-competition mechanisms, such as policing and punishment, seem to be of particular importance in human societies, where cooperative interactions often occur among unrelated individuals. In line with this view, economic games have shown that the ability to punish defectors enforces cooperation among humans. Here, I examine a real-world example of a repression-of-competition system, the police institutions common to modern human societies. Specifically, I test evolutionary policing theory by comparing data on policing effort, per capita crime rate, and similarity (used as a proxy for genetic relatedness) among citizens across the 26 cantons of Switzerland. This comparison revealed full support for all three predictions of evolutionary policing theory. First, when controlling for policing efforts, crime rate correlated negatively with the similarity among citizens. This is in line with the prediction that high similarity results in higher levels of cooperative self-restraint (i.e. lower crime rates) because it aligns the interests of individuals. Second, policing effort correlated negatively with the similarity among citizens, supporting the prediction that more policing is required to enforce cooperation in low-similarity societies, where individuals' interests diverge most. Third, increased policing efforts were associated with reductions in crime rates, indicating that policing indeed enforces cooperation. These analyses strongly indicate that humans respond to cues of their social environment and adjust cheating and policing behaviour as predicted by evolutionary policing theory. PMID:21909429

A sum-over-histories generalized quantum theory is developed for homogeneous minisuperspace type A Bianchi cosmological models, focusing on the particular example of the classically recollapsing Bianchi type-IX universe. The decoherence functional for such universes is exhibited. We show how the probabilities of decoherent sets of alternative, coarse-grained histories of these model universes can be calculated. We consider in particular the probabilities for classical evolution defined by a suitable coarse graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not. For these situations we show that the probability is near unity for the universe to recontract classically if it expands classically. We also determine the relative probabilities of quasiclassical trajectories for initial states of WKB form, recovering for such states a precise form of the familiar heuristic 'J·dΣ' rule of quantum cosmology, as well as a generalization of this rule to generic initial states.

What has happened to this resource now? This brief paper outlines how the developers of the reference resource have improved on the design and content of the medical database. The reference resource is now an Internet-based resource called General Practice Notebook (www.gpnotebook.co.uk), which is currently attracting 5000 to 9000 page views per day and contains over 30 000 index terms in a complex web structure of over 60 000 links. This paper describes the evolutionary process that has occurred over the last decade.

Conventional ecological models show that complexity destabilizes foodwebs, suggesting that foodwebs should have neither large numbers of species nor a large number of interactions. However, in nature the opposite appears to be the case. Here we show that if the interactions between species are allowed to evolve within a generalized Lotka-Volterra model, stabilizing feedbacks and weak interactions emerge automatically. Moreover, we show that trophic levels also emerge spontaneously from the evolutionary approach, and the efficiency of the unperturbed ecosystem increases with time. The key to stability in large foodwebs appears to arise not from complexity per se but from evolution at the level of the ecosystem, which favors stabilizing (negative) feedbacks.
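The role of stabilizing (negative) self-feedbacks and weak interspecies interactions can be sketched with a minimal generalized Lotka-Volterra simulation. This is an illustrative toy, not the paper's evolving-interactions model: the interaction matrix here is fixed, with strong negative diagonal terms and weak random off-diagonal couplings:

```python
# Minimal sketch: a generalized Lotka-Volterra community
#   dN_i/dt = N_i (r_i + sum_j A_ij N_j)
# with strong negative self-interaction (A_ii < 0) and weak couplings,
# which settles onto the interior fixed point N* solving r + A N* = 0.
import numpy as np

rng = np.random.default_rng(1)
S = 5                                   # number of species
r = rng.uniform(0.5, 1.0, S)            # intrinsic growth rates
A = 0.05 * rng.normal(size=(S, S))      # weak off-diagonal interactions
np.fill_diagonal(A, -1.0)               # strong stabilizing self-feedback

def step(N, dt=0.01):
    """One explicit Euler step of the generalized Lotka-Volterra dynamics."""
    return N + dt * N * (r + A @ N)

N = rng.uniform(0.1, 1.0, S)            # random positive initial abundances
for _ in range(20000):                  # integrate to t = 200
    N = step(N)

N_star = np.linalg.solve(A, -r)         # interior fixed point
print(np.allclose(N, N_star, atol=1e-3))
```

With the diagonal weakened or the off-diagonal couplings strengthened, the same code exhibits the instability that the conventional complexity-stability argument predicts.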

What might now be referred to as the "classical" evolutionary biological theory of why we age has had a number of serious challenges in recent years. While the theory might therefore have to be modified under certain circumstances, in the author's opinion, it still provides the soundest theoretical basis for thinking about how we age. Nine modalities of gene action that have the potential to modulate processes of aging are reviewed, including the two most widely reviewed and accepted concepts ("antagonistic pleiotropy" and "mutation accumulation"). While several of these nine mechanisms can be regarded as derivatives of the antagonistic pleiotropic concept, they frame more specific questions for future research. Such research should pursue what appears to be the dominant factor in the determination of intraspecific variations in longevity: stochastic mechanisms, most likely based upon epigenetics. This contrasts with the dominant factor in the determination of interspecific variations in longevity: the constitutional genome, most likely based upon variations in regulatory loci.

The metabolic theory of ecology stipulates that molecular evolutionary rates should correlate with temperature and latitude in ectothermic organisms. Previous studies have shown that most groups of vertebrates, such as amphibians, turtles and even endothermic mammals, have higher molecular evolutionary rates in regions where temperature is high. However, the association between molecular evolutionary rates and temperature or latitude has never been tested in Squamata. We used a large dataset including the spatial distributions and environmental variables for 1,651 species of Squamata and compared the contrast of the rates of molecular evolution with the contrast of temperature and latitude between sister species. Using major axis regressions and a new algorithm to choose independent sister species pairs, we found that temperature and absolute latitude were not associated with molecular evolutionary rates. This absence of association in such a diverse ectothermic group questions the mechanisms explaining current patterns of species diversity in Squamata and challenges the presupposed universality of the metabolic theory of ecology.

This study investigated the effects of different styles of social interaction on the learning of advanced biological knowledge. Recent research has increasingly acknowledged the importance of social interaction for promoting learning and cognitive development. However, there has been controversy about the optimal style of interaction. Some studies have shown the beneficial effects of symmetrical interactions, such as an argument between peers, whereas other studies have found the superiority of asymmetrical interactions, in which a novice learns with the guidance of an expert. The reason for the contradictory results may be that different styles of interaction enhance different kinds of learning. The present study focused on three styles of interaction: (1) Conflicting style, in which two novice students with scientifically wrong but conflicting views argue with one another; (2) Guiding style, in which a novice student is led by a more expert student to an understanding of scientifically appropriate knowledge; (3) Mutual Constructive style, in which an expert student and a novice student jointly solve a scientific problem on an equal footing. Sixty non-biology-major college students and 30 biology-major students participated in this experiment, discussing an evolutionary problem in these three styles of interaction, with the former serving as novices and the latter as experts. Analyses of the Pre- and Posttest performance and the discussion processes in the Interaction session revealed the following. First, the Guiding style and the Mutual Constructive style enhanced the acquisition of the scientific evolutionary conceptual framework more effectively than the Conflicting style. However, some students in the Conflicting style also grasped the scientific evolutionary framework, and many students reconstructed their theories of evolution through discussion, even if the frameworks remained scientifically inappropriate. Second, the students who discussed

We investigate theories of gravitation in which spacetime is non-Riemannian and the metric g_μν is nonsymmetric, for ghosts and tachyons, using a spin-projection operator formalism. Ghosts are removed not by gauge invariance but by a Lagrange multiplier W_μ, which occurs due to the breaking of projective invariance in the theory. Unified theories based on a Lagrangian containing a term λg^μν g_… are proved to contain ghosts or tachyons.

In several papers, Hauer (1988, 1989, 2000a, 2000b, 2016) has argued that the level of safety built into roads is unpremeditated, i.e. not the result of decisions based on knowledge of the safety impacts of design standards. Hauer has pointed out that the development of knowledge about the level of safety built into roads has been slow and remains incomplete even today. Based on these observations, this paper asks whether evolutionary theory can contribute to explaining the slow development of knowledge. A key proposition of evolutionary theory is that knowledge is discovered through a process of learning-by-doing; it is not necessarily produced intentionally by means of research or development. An unintentional discovery of knowledge is treacherous as far as road safety is concerned, since an apparently effective safety treatment may simply be the result of regression-to-the-mean. The importance of regression-to-the-mean was not fully understood until about 1980, and a substantial part of what was regarded as known at that time may have been based on studies not controlling for regression-to-the-mean. An attempt to provide an axiomatic foundation for designing a safe road system was made by Gunnarsson and Lindström (1970). This had the ambition of providing universal guidelines that would facilitate a preventive approach, rather than the reactive approach based on accident history (i.e. designing a system known to be safe, rather than reacting to events in a system of unknown safety). Three facts are notable about these principles. First, they are stated in very general terms and do not address many of the details of road design or traffic control. Second, they are not based on experience showing their effectiveness. Third, they are partial and do not address the interaction between elements of the road traffic system, in particular road user adaptation to system design. Another notable fact consistent with evolutionary theory is that the safety margins built

Theories of natural language and concepts have been unable to model the flexibility, creativity, context-dependence, and emergence exhibited by words, concepts and their combinations. The mathematical formalism of quantum theory has instead been successful in capturing phenomena such as graded membership, situational meaning, composition of categories, and also more complex decision making situations, which cannot be modeled in traditional probabilistic approaches. We show how a formal quantum approach to concepts and their combinations can provide a powerful extension of prototype theory. We explain how prototypes can interfere in conceptual combinations as a consequence of their contextual interactions, and provide an illustration of this using an intuitive wave-like diagram. This quantum-conceptual approach gives new life to original prototype theory, without however making it a privileged concept theory, as we explain at the end of our paper.

The so-called unimodular theory of gravitation is compared with general relativity in the quadratic (Fierz-Pauli) regime, using a quite broad framework, and it is argued that quantum effects allow one, in principle, to discriminate between the two theories.

The relationships are discussed of the general relativity theory to other fields of today's physics. Recent results are reported of studies into gravitational radiation, relativistic astrophysics, cosmology and the quantum theory. (Z.M.)

The human hepatitis B virus causes acute and chronic hepatitis and is considered one of the most serious human health issues by the World Health Organization, causing thousands of deaths per year. There are similar viruses belonging to the Hepadnaviridae family that infect non-human primates and other mammals as well as some birds. The majority of non-human primate virus isolates were phylogenetically close to the human hepatitis B virus, but like the human genotypes, the origins of these viruses remain controversial. However, there is a possibility that human hepatitis B virus originated in primates. Knowing whether these viruses might be common to humans and primates is crucial in order to reduce the risk to humans. To review the existing knowledge about the evolutionary origins of viruses of the Hepadnaviridae family in primates. This review was done by reading several articles that provide information about the Hepadnaviridae virus family in non-human primates and humans and the possible origins and evolution of these viruses. The evolutionary origin of viruses of the Hepadnaviridae family in primates has been dated back to several thousand years; however, recent analyses of genomic fossils of avihepadnaviruses integrated into the genomes of several avian species have suggested a much older origin of this genus. Some hypotheses about the evolutionary origins of human hepatitis B virus have been debated since the '90s. One theory suggested a New World origin because of the phylogenetic co-segregation between some New World human hepatitis B virus genotypes F and H and woolly monkey human hepatitis B virus in basal sister-relationship to the Old World non-human primates and human hepatitis B virus variants. Another theory suggests an Old World origin of human hepatitis B virus, and that it would have been spread following prehistoric human migrations over 100,000 years ago. A third theory suggests a co-speciation of human hepatitis B virus in non-human primate hosts because of the proximity between the phylogeny of Old and New World non-human primate and their human hepatitis B virus variants.

In view of the 70th anniversary of the discovery of the General Theory of Relativity, an analysis was made of the special and general theories. The basic postulates, their consequences for the formulation of the theories, the main results, and some aspects related to experimental verification and its applications are presented, as are some elements of the mathematical formalism of the theories, to facilitate the logical interrelationships between their results and consequences. (author)

Nature shows that human beings live and grow inside social structures. This assumption allows us to explain and explore how social structure may shape most of our behaviours and choices, and why we are not just blindly driven by instincts: our decisions are based on more complex cognitive reasons, grounded in our connectedness across different spaces. Thus, human cooperation emerges from this complex nature of the social network. Our paper, focusing on evolutionary dynamics, is intended to explore how and why this happens, and what kind of impact is caused by homophily among people. We investigate the evolution of human cooperation using evolutionary game theory on a multiplex network. Multiplexity, as an extra dimension of analysis, allows us to unveil the hidden dynamics and observe non-trivial patterns within a population across network layers. More importantly, we find a striking role of homophily: the higher the homophily between individuals, the quicker the convergence towards cooperation in the social dilemma. The simulation results, conducted both macroscopically and microscopically across the network layers of the multiplex, show quantitatively the role of homophily in human cooperation.

compatible with the Shockley-Queisser limit and the classical diode theory. For organic solar cells, exciton binding energies are sufficiently high, so that purely bipolar models are no longer applicable. Instead, excitonic transport has to be included. Thus, the inclusion of exciton transport into the bipolar detailed balance model leads to a generalized detailed balance model that simulates solar cells with predominantly bipolar transport, with predominantly excitonic transport and with every combination of both. Due to low exciton diffusion lengths, organic solar cells are usually combined with a specific device geometry, the bulk heterojunction. In a bulk heterojunction device, the whole bulk of the absorber is made up of distributed heterojunctions, where the exciton is transferred to a bound pair at the interface, which is then split into free electron and hole. The assumption that exciton transport is only relevant towards the next heterointerface allows one to develop a version of the detailed balance model that is applicable to bulk heterojunction cells. The last variation of the detailed balance model includes the process of impact ionisation as a means to generate more than one exciton from a single high-energy photon. The model for multiple exciton generating absorbers identifies possible bottlenecks as well as maximum efficiencies of future solar cells that use this concept. Another direct consequence of the principle of detailed balance is a reciprocity theorem between electroluminescence and solar cell quantum efficiency. The theoretical part of this thesis discusses the validity range of this reciprocity and checks, for each version of the model, whether the relation between electroluminescence and quantum efficiency is still applicable. The main result shows that voltage-dependent carrier collection as encountered in low-mobility pin-junction devices leads to deviations from the reciprocity, while it still holds for most pn-junction solar cells. The
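The classical diode theory mentioned above can be illustrated numerically. The sketch below uses the ideal diode law with invented but plausible parameter values (J_sc, J_0 are assumptions for illustration, not values from the thesis), and extracts the open-circuit voltage and a coarse maximum-power point:

```python
# Illustrative only: the classical diode law J(V) = J_sc - J_0 (e^{V/V_T} - 1),
# the bipolar limit to which the detailed-balance models reduce.
import math

kT_q = 0.02585                      # thermal voltage at 300 K [V]
J_sc, J_0 = 35.0, 1e-9              # short-circuit / saturation current densities
                                    # [mA/cm^2]; invented illustrative values

def current(V):
    return J_sc - J_0 * (math.exp(V / kT_q) - 1.0)

# Open-circuit voltage from J(V_oc) = 0
V_oc = kT_q * math.log(J_sc / J_0 + 1.0)

# Maximum power point by a coarse 1 mV scan
V_mpp = max((V * current(V), V) for V in [i * 1e-3 for i in range(int(V_oc * 1000))])[1]
print(round(V_oc, 3), round(V_mpp, 3))
```

The excitonic and bulk-heterojunction generalizations discussed in the text modify the generation and collection terms, but reduce to this J(V) characteristic in the purely bipolar limit.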

Although theories that examine direct links between behavior and brain remain incomplete, it is known that brain expansion significantly correlates with caloric and oxygen demands. Therefore, one of the principles governing evolutionary cognitive neuroscience is that cognitive abilities that require significant brain function (and/or structural support) must be accompanied by significant fitness benefit to offset the increased metabolic demands. One such capacity is self-awareness (SA), which (1) is found only in the greater apes and (2) remains unclear in terms of both cortical underpinning and possible fitness benefit. In the current experiment, transcranial magnetic stimulation (TMS) was applied to the prefrontal cortex during a spatial perspective-taking task involving self and other viewpoints. It was found that delivery of TMS to the right prefrontal region disrupted self-, but not other-, perspective. These data suggest that self-awareness may have evolved in concert with other right hemisphere cognitive abilities.

Students accepting evolution are likely to rely on science as cognitive authority (i.e. science textbooks and science teachers). In contrast, those not accepting it are likely to rely on religion as cognitive authority (i.e. religious texts and religious leaders). A thematic analysis based on existing quantitative and qualitative studies has been carried out in order to propose a theoretical framework for a range of reasons contributing to students' acceptance and rejection of evolutionary theory. This article argues that instruction in evolution is more than a matter of delivering scientific content. It also deals with personal worldviews influenced by different forms of cognitive authority. It is therefore important to put more emphasis on developing students' learning skills to critically evaluate which sources of information are scientifically appropriate, with full respect for the religious beliefs of individuals.

Conclusion: Some hypotheses about the evolutionary origins of human hepatitis B virus have been debated since the ‘90s. One theory suggested a New World origin because of the phylogenetic co-segregation between some New World human hepatitis B virus genotypes F and H and woolly monkey human hepatitis B virus in basal sister-relationship to the Old World non-human primates and human hepatitis B virus variants. Another theory suggests an Old World origin of human hepatitis B virus, and that it would have been spread following prehistoric human migrations over 100,000 years ago. A third theory suggests a co-speciation of human hepatitis B virus in non-human primate hosts because of the proximity between the phylogeny of Old and New World non-human primate and their human hepatitis B virus variants. The importance of further research, related to the subject in South American wild fauna, is paramount and highly relevant for understanding the origin of human hepatitis B virus.

Background The order Retroviridae comprises viruses based on ribonucleic acids (RNA). Some, such as HIV and HTLV, are human pathogens. Newly emerged human retroviruses have zoonotic origins. As far as has been established, both repeated infections (themselves possibly responsible for the evolution of viral mutations, Vm) and host adaptability (Ha), along with interplay between inhibitors and promoters of cell tropism, are needed to effect retroviral cross-species transmissions. However, the exact modus operandi of the intertwining of these factors at the molecular level remains to be established. Knowledge of such intertwining could lead to a better understanding of retrovirology and possibly other infectious processes. This study was conducted to derive the mathematical equation of a general theory of the origins of retroviruses. Methods and results On the basis of an arbitrarily non-Euclidian geometrical "thought experiment" involving the cross-species transmission of simian foamy virus (sfv) from a non-primate species Xy to Homo sapiens (Hs), initially excluding all social factors, the following was derived. At the port of exit from Xy (where the species barrier, SB, is defined by the Index of Origin, IO), sfv shedding is (1) enhanced by two transmitting tensors (Tt), (i) virus-specific immunity (VSI) and (ii) evolutionary defenses such as APOBEC, RNA interference pathways, and (when present) expedited therapeutics (denoted e2D); and (2) opposed by five accepting scalars (As): (a) genomic integration hot spots, gIHS, (b) nuclear envelope transit (NMt) vectors, (c) virus-specific cellular biochemistry, VSCB, (d) virus-specific cellular receptor repertoire, VSCR, and (e) pH-mediated cell membrane transit, (↓pH) CMat. Assuming As and Tt to be independent variables, IO = Tt/As. The same forces acting in an opposing manner determine SB at the port of sfv entry (defined here by the Index of Entry, IE = As/Tt). Overall, if sfv encounters no unforeseen effects on

We discuss noncommutative gauge theory from the generalized geometry point of view. We argue that the equivalence between the commutative and semiclassically noncommutative DBI actions is naturally encoded in the generalized geometry of D-branes.

An annotated bibliography of 20 items and a discussion of its significance was presented to describe current utilization of subject theories in the construction of an educational theory. Also, a theory model was used to demonstrate construction of a scientific educational theory. The theory model incorporated set theory (S), information theory…

This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
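A minimal encoder can make the algebraic machinery concrete. The sketch below implements the standard textbook rate-1/2 convolutional code with generator polynomials (7, 5) in octal; this particular code is our illustrative choice, not necessarily one of the codes (or the compact-code class) discussed in the article:

```python
# Hypothetical illustration: a rate-1/2 convolutional encoder with
# constraint length K = 3 and generators g1 = 7, g2 = 5 (octal).
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Encode a bit list; each input bit yields two output bits."""
    state = 0
    out = []
    for b in bits:
        # Shift the new bit into a K-bit register
        state = ((state << 1) | b) & ((1 << K) - 1)
        # Each output bit is the parity of the register masked by a generator
        out.append(bin(state & g1).count('1') % 2)
        out.append(bin(state & g2).count('1') % 2)
    return out

print(conv_encode([1, 0, 1, 1]))
```

The generator polynomials play the role of the polynomial algebra the article develops: the encoder is a linear map over GF(2) whose structural invariants (such as the Hilbert series mentioned in the text) are defined in terms of exactly these polynomial descriptions.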

Economists traditionally see altruistic acts as irrational. However, in the Prisoner's Dilemma, a rational player can do worse than a moral player. The rules of the game imply that one cannot defend one's best interest if one tries to. Game theory has struggled to explain how an agent could have access to the strategically best outcome without behaving irrationally, but with little success. Can a complex systems approach do better? Peter Danielson, using Evolutionary Game Theory, has avoided some of the assumptions of Game Theory by using a complexity approach to reframe the problem, and offers a solution of sorts. According to Danielson, the foundations of altruism are mechanisms of deterrence that rely on credible threat: we are nice for fear of retaliation. He is both right and wrong. It will be argued that utilitarian, consequentialist principles must have been at work to create the conditions for altruistic acts to be performed. It is wrong to expect, however, that the same reasons are the reasons for action. In order for a model of genuine altruism to be possible, an extra cog must be inserted in the mechanism of causality in order to distance moral action from its strategic advantages. If emotions fulfill this role, we can tell a story in which it is rational to act on altruistic motivations and materially advantageous to hold such motivations. Moral sentiments can be seen as a tool designed by evolution to help optimize cooperation in a social environment. The proposed account integrates the Humean theory of motivation with Robert Frank's commitment model and Aristotle's views on moral education, keeping an adequate story of how it can be in our material interest to be moral without having to renounce the existence of genuine acts of altruism.

Generalized Yang-Mills theory has a covariant derivative, which contains both vector and scalar gauge bosons. Based on this theory, we construct a strong interaction model by using the group U(4). By using this U(4) generalized Yang-Mills model, we also obtain a gauge potential solution, which can be used to explain the asymptotic behavior and color confinement.

The article claims that comic entertainment consists of five elements: (1) priming of the comic events to come; (2) some comic entertainment input that creates arousal; (3) entertainment-internal signals of the playful nature of the comic input; (4) appraisal processes in audience members that evaluate the input as 'not real but playful'; (5) this leads to a change in hedonic tone, and arousal is combined with the release of endorphins (morphine-like neurotransmitters) that makes the arousal pleasant. The theory of comic entertainment accords with the PECMA flow theory proposed in Grodal: Embodied Visions, because the evaluation 'playful, not real' reduces the muscular directedness towards the world. Comic entertainment is further linked to human bonding.

In this study, a system dynamics (SD) model is developed to guide the subsidy policies to promote the diffusion of green supply chain management (GSCM) in China. The relationships of stakeholders such as government, enterprises and consumers are analyzed through evolutionary game theory. Finally...
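The evolutionary-game component of such an analysis can be sketched with a two-population replicator dynamic. The payoffs below are invented for illustration (they are not the paper's SD model or its calibrated parameters): enterprises choose whether to adopt GSCM, the government chooses whether to subsidize, and each strategy share grows in proportion to its payoff advantage:

```python
# Invented illustration: two-population replicator dynamics for a stylized
# subsidy game. x = share of enterprises adopting GSCM, y = share of
# government units subsidizing. Payoffs are assumptions for this sketch.
x, y = 0.5, 0.5
dt = 0.01
for _ in range(20000):                  # integrate to t = 200
    u_adopt = -1.0 + 3.0 * y            # adoption cost 1, subsidy worth 3
    u_subsidize = 2.0 * x - 1.0         # social benefit 2 per adopter, subsidy cost 1
    x += dt * x * (1 - x) * u_adopt
    y += dt * y * (1 - y) * u_subsidize
print(round(x, 2), round(y, 2))         # both shares converge toward 1
```

With these payoffs, subsidization and adoption reinforce each other and the system converges to full adoption; changing the signs of the payoff parameters produces the non-adoption equilibrium instead, which is the kind of policy sensitivity an SD model explores.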

This paper discusses the relationship between religion and science education in the light of the cognitive sciences. We challenge the popular view that science and religion are compatible, a view that suggests that learning and understanding evolutionary theory has no effect on students' religious beliefs and vice versa. We develop a cognitive…

In spite of its success, Neo-Darwinism is faced with major conceptual barriers to further progress, deriving directly from its metaphysical foundations. Most importantly, neo-Darwinism fails to recognize a fundamental cause of evolutionary change, "niche construction". This failure restricts the generality of evolutionary theory, and introduces inaccuracies. It also hinders the integration of evolutionary biology with neighbouring disciplines, including ecosystem ecology, developmental biology, and the human sciences. Ecology is forced to become a divided discipline, developmental biology is stubbornly difficult to reconcile with evolutionary theory, and the majority of biologists and social scientists are still unhappy with evolutionary accounts of human behaviour. The incorporation of niche construction as both a cause and a product of evolution removes these disciplinary boundaries while greatly generalizing the explanatory power of evolutionary theory.

The paper concerns Einstein's general relativity, wave mechanics and the quantization of Einstein's gravitation equations. The principle of equivalence and its association with both wave mechanics and quantum gravity, is discussed. (U.K.)

This study was designed to extend the generality of the Substrata-Factor Theory by two methods of investigation: (1) theoretically, to establish the validity of the hypothesis that an isomorphic relationship exists between the Substrata-Factor Theory and the General Open Systems Theory, and (2) experimentally, to discover through a…

A spatial generalized Jacobi identity obeyed by the polarization-dependent factors of the vertices in a q q̄ → Wγ process is studied. The amplitude for gluon-gluon scattering with five particles is worked out. By reorganizing this amplitude in analogy with a photon-pion interaction process, it is shown that the spatial generalized Jacobi identity does not hold; instead there are many spatial partial identities which, in the case of a four-particle process, compose into one single identity. A process with four particles, three of them scalar fields, is also studied in the one-loop approximation. In this case too, the non-existence of the spatial generalized Jacobi identity is demonstrated. (author)

An attempt is made to compare the solutions of field equations, corresponding to quadratic equations for the fields (g_μν, Γ^α_μν) in gauge gravitation theory (GGT), with general relativity theory solutions. Without restrictions to a concrete type of metric, only solutions of equations for which torsion turns to zero are considered. Equivalence of the vacuum equations of gauge quadratic theory of gravity and general relativity theory is proved using the Newman-Penrose formalism.

The integrative role that Evolutionarytheory plays within Biology is recognised by most scientific authors, as well as in governmental education policies, including Brazilian policies. However, teaching and learning evolution seems problematic in many countries, and Brazil is among those. Many factors may affect teachers' and students' perceptions towards evolution, and studies can help to reveal those factors. We used a conceptual questionnaire, the Measure of Acceptance of the Theory of Evolution (MATE) instrument, and a Knowledge test to assess (1) the level of acceptance and understanding of 23 undergraduate Biology students nearing the end of their course, (2) other factors that could affect these levels, including course structure, and (3) the most difficult topics regarding evolutionary biology. The results of this study showed that the students, on average, had a 'Very High Acceptance' (89.91) and a 'Very Low Knowledge' (59.42%) of Evolutionarytheory, and also indicated a moderate positive correlation between the two (r = 0.66, p = .001). The most difficult topics were related to the definition of evolution and dating techniques. We believe that the present study provides evidence for policymakers to reformulate current school and university curricula in order to improve the teachers' acceptance and understanding of evolution and other biological concepts, consequently, helping students reduce their misconceptions related to evolutionary biology.

In this work we present a general mathematical framework to deal with Quantum Networks, i.e. networks resulting from the interconnection of elementary quantum circuits. The cornerstone of our approach is a generalization of the Choi isomorphism that allows one to efficiently represent any given Quantum Network in terms of a single positive operator. Our formalism allows one to face and solve many quantum information processing problems that would be hardly manageable otherwise, the most relevant of which are reviewed in this work: quantum process tomography, quantum cloning and learning of transformations, inversion of a unitary gate, information-disturbance tradeoff in estimating a unitary transformation, cloning and learning of a measurement device (Authors)
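The Choi isomorphism that the framework generalizes can be shown in its simplest textbook instance. The construction below (our minimal example, not the authors' network formalism) builds the Choi operator of the qubit identity channel and checks the positivity that certifies complete positivity:

```python
# Standard textbook construction: the Choi operator of a channel E is
# (E ⊗ I)|Ω><Ω| with the unnormalized maximally entangled |Ω> = Σ_i |ii>.
# Here E is the identity channel on a qubit, so the Choi operator is
# simply the projector |Ω><Ω| itself.
import numpy as np

d = 2
omega = np.zeros(d * d)
for i in range(d):
    omega[i * d + i] = 1.0              # |Ω> = |00> + |11>
choi = np.outer(omega, omega)           # Choi operator of the identity channel

eigs = np.linalg.eigvalsh(choi)
# Positive semidefiniteness of the Choi operator certifies complete
# positivity; its trace equals d for a trace-preserving qubit channel.
print(eigs.min() > -1e-12, round(float(choi.trace())))
```

Representing a whole network of such maps as a single positive operator, as the abstract describes, is what lets problems like process tomography and gate inversion be phrased as optimizations over one matrix.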

This review describes developments in theoretical particle physics in the 1950s which were important in the race to develop a putative general theory of quantized fields, especially ideas that offered a mathematically rigorous theory. Basic theoretical concepts then available included the Hamiltonian formulation of quantum dynamics, canonical quantization, perturbative renormalization theory and the theory of distributions. Following a description of various important theoretical contributions of this era, the review ends with a summary of the most important contributions of axiomatic field theory to concrete physics applications.

The author proposes a model for client control of case information via the World Wide Web built on principles of general system theory. It incorporates the client into the design, resulting in an information structure that differs from traditional human services information-sharing practices. Referencing general system theory, the concepts of controller and controlled system, as well as entropy and negentropy, are applied to the information flow and autopoietic behavior as they relate to the boundary-maintaining functions of today's organizations. The author's conclusions synthesize general system theory and human services values to lay the foundation for an information-sharing framework for human services in the 21st century.

It has been shown that generalized Einstein-Aether theories may lead to significant modifications to the nonrelativistic limit of the Einstein equations. In this paper we study the effect of a general class of such theories on the Solar System. We consider corrections to the gravitational potential in negative and positive powers of distance from the source. Using measurements of the perihelion shift of Mercury and time delay of radar signals to Cassini, we place constraints on these corrections. We find that a subclass of generalized Einstein-Aether theories is compatible with these constraints

A general sensitivity theory is developed for nonlinear lumped-parameter system simulations. The point of departure is general perturbation theory, which has long been used for linear systems in nuclear engineering and reactor physics. The theory allows the sensitivity of particular figures-of-merit of the system behavior to be calculated with respect to any parameter. An explicit procedure is derived for applying the theory to physical systems undergoing sudden events (e.g., reactor scrams, tank ruptures). A related problem, treating figures-of-merit defined as functions of extremal values of system variables occurring at sudden events, is handled by the same procedure. The general calculational scheme for applying the theory to numerical codes is discussed. It is shown that codes which use pre-packaged implicit integration subroutines can be augmented to include sensitivity theory: a companion set of subroutines to solve the sensitivity problem is listed. This combined system analysis code is applied to a simple model for loss of post-accident heat removal in a liquid metal-cooled fast breeder reactor. The uses of the theory for answering more general sensitivity questions are discussed. One application of the theory is to systematically determine whether specific physical processes in a model contribute significantly to the figures-of-merit. Another application is for selecting parameter values which enable a model to match experimentally observed behavior.
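As a minimal illustration of the sensitivity question posed above, one can contrast the perturbation-theory approach with a brute-force finite-difference estimate. The sketch below is our own toy model, not the reactor code described in the abstract: the lumped-parameter system, the rate parameter `k`, and all numerical values are hypothetical.

```python
import math

# Hypothetical lumped-parameter model (not the code in the abstract):
# dT/dt = -k * (T - T_env), figure-of-merit = final temperature T(t_end).
def simulate(k, t_end=10.0, dt=0.001, T0=100.0, T_env=20.0):
    T = T0
    for _ in range(int(t_end / dt)):
        T += dt * (-k * (T - T_env))     # explicit Euler step
    return T

# Brute-force sensitivity d(FOM)/dk by central finite differences.
def sensitivity(k, eps=1e-6):
    return (simulate(k + eps) - simulate(k - eps)) / (2 * eps)

# Analytic check: T(t) = T_env + (T0 - T_env) * exp(-k t),
# so dT/dk = -(T0 - T_env) * t * exp(-k t).
k, t_end = 0.3, 10.0
approx = sensitivity(k)
exact = -(100.0 - 20.0) * t_end * math.exp(-k * t_end)
print(abs(approx - exact) < 0.5)
```

The finite-difference route needs two full simulations per parameter; the adjoint (generalized perturbation) formulation described in the abstract yields the same derivatives for all parameters at roughly the cost of one extra companion solve, which is why it scales to large models.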

We discuss the meaning and prove the mutual consistency of general relativity, wave mechanics, and the quantization of Einstein's gravitation equations themselves. First, there is the problem of the influence of gravitational fields on de Broglie waves; this influence is in accordance with Einstein's weak principle of equivalence and the limitation of measurements given by Heisenberg's uncertainty relations. Second, the quantization of the gravitational fields is a ''quantization of geometry.'' However, classical and quantum gravitation have the same physical meaning, according to the limitations of measurements given by Einstein's strong principle of equivalence and the Heisenberg uncertainties for the mechanics of test bodies.

In this paper the systematic study of the generalized Bessel functions (GBF), recently introduced and often encountered in scattering problems for which the dipole approximation is inadequate, is continued. The relations among different GBF are analysed, and their importance for the solution of differential finite-difference equations of the Raman-Nath type is discussed. Numerical results for the first-kind cylinder GBF in the preasymptotic region, and a preliminary analysis of the asymptotic properties of the modified GBF, are presented.

This book is prepared for M.Sc. students of mathematics and physics. The aim of writing this book is to give the reader a feeling for the necessity and beauty of the laws of general relativity. The contents will attract both mathematicians and physicists, since the book provides motivation and applications of many ideas and powerful mathematical methods of modern analysis and differential geometry. An attempt has been made to make the presentation comprehensive, rigorous and yet simple. Most calculations and transformations have been carried out in great detail. KEY FEATURE: Numerous solved examples using well-known mathematical techniques, viz., tensors and differential forms, in each chapter.

Metabolite exchanges in microbial communities give rise to ecological interactions that govern ecosystem diversity and stability. It is unclear, however, how the rise of these interactions varies across metabolites and organisms. Here we address this question by integrating genome-scale models of metabolism with evolutionary game theory. Specifically, we use microbial fitness values estimated by metabolic models to infer evolutionarily stable interactions in multi-species microbial "games". We first validate our approach using a well-characterized yeast cheater-cooperator system. We next perform over 80,000 in silico experiments to infer how metabolic interdependencies mediated by amino acid leakage in Escherichia coli vary across 189 amino acid pairs. While most pairs display shared patterns of inter-species interactions, multiple deviations are caused by pleiotropy and epistasis in metabolism. Furthermore, simulated invasion experiments reveal possible paths to obligate cross-feeding. Our study provides genomically driven insight into the rise of ecological interactions, with implications for microbiome research and synthetic ecology.
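The "evolutionarily stable interactions" inferred above rest on the textbook ESS test for a symmetric game. The sketch below is a generic illustration with made-up payoffs for a cooperator-cheater pair, not the genome-scale fitness values used in the study.

```python
# Minimal ESS check for a symmetric game (illustrative, not the study's pipeline).
# payoff[i][j] = fitness of strategy i playing against strategy j.

def is_ess(payoff, i):
    """Strategy i is an ESS if, for every j != i, either E(i,i) > E(j,i),
    or E(i,i) == E(j,i) and E(i,j) > E(j,j)."""
    for j in range(len(payoff)):
        if j == i:
            continue
        if payoff[i][i] < payoff[j][i]:
            return False
        if payoff[i][i] == payoff[j][i] and payoff[i][j] <= payoff[j][j]:
            return False
    return True

# Hypothetical payoffs: index 0 = cooperator, 1 = cheater.
payoff = [[3.0, 1.0],
          [4.0, 2.0]]

print([is_ess(payoff, i) for i in range(2)])  # → [False, True]: cheating is stable here
```

In the study, the payoff entries are replaced by fitness values estimated from genome-scale metabolic models, so the same stability test can be run for every metabolite-exchange scenario.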

There is little doubt that family factors can influence involvement in delinquency, although the full nature and extent of their influence remain unclear. In recent decades, testosterone has been increasingly implicated as a contributor to adolescent offending. The present study sought to determine whether two important types of familial factors--parental socioeconomic status and amicable parent-child relationships--interact with testosterone (and possibly other androgens) to affect delinquency. A large sample of North American college students self-reported their involvement in eight categories of delinquency, along with self-ratings of various androgen-promoted traits (e.g., muscularity and a low, deep voice), parental social status, and the quality of their relationships with their parents. Factor analysis revealed that the authors' measures of all four categories of variables exhibited strong loadings onto their respective factors. In both sexes, androgens and amicable parent-child relationships were significantly associated with delinquency, but parental social status was not. About one third of the influence of parent-child relationships on delinquency appeared to be attributable to androgens. Findings are discussed from the perspective of the evolutionary neuroandrogenic theory of delinquent and criminal behavior.

Evolutionary game theory has shown that human cooperation thrives in different types of social interactions with a prisoner's dilemma (PD) structure. Models treat the cooperative strategies within the different frameworks as discrete entities, and sometimes even as contenders. Whereas strong reciprocity was acclaimed as superior to classic reciprocity for its ability to defeat defectors in public goods games, recent experiments and simulations show that costly punishment fails to promote cooperation in the indirect reciprocity (IR) and direct reciprocity (DR) games, where classic reciprocity succeeds. My aim is to show that cooperative strategies across frameworks are capable of a unified treatment, for they are governed by a common underlying rule or norm. An analysis of the reputation and action rules that govern some representative cooperative strategies, both in models and in economic experiments, confirms that the different frameworks share a conditional action rule and several reputation rules. The common conditional rule contains an option between costly punishment and withholding benefits that provides alternative enforcement methods against defectors. Depending on the framework, individuals can switch to the appropriate strategy and method of enforcement. The stability of human cooperation looks more promising if one mechanism controls successful strategies across frameworks.

The two papers in this document focus on general systems theory. In her paper, Linda Lederman discusses the emergence and evolution of general systems theory, defines its central concepts, and draws some conclusions regarding the nature of the theory and its value as an epistemology. Don Rogers, in his paper, relates some of the important features…

We consider the nonrelativistic N-body scattering problem for a system of particles in which some subsets of the particles are identical. We demonstrate how the particle identity can be included in a general class of linear integral equations for scattering operators or components of scattering operators. The Yakubovskii, Yakubovskii--Narodestkii, Rosenberg, and Bencze--Redish--Sloan equations are included in this class. Algebraic methods are used which rely on the properties of the symmetry group of the system. Operators depending only on physically distinguishable labels are introduced and linear integral equations for them are derived. This procedure maximally reduces the number of coupled equations while retaining the connectivity properties of the original equations

In this paper a previous letter (where, among other things, a classical ''quark confinement'' was derived from general relativity plus dilatation-covariance) is completed by showing that the theory is also compatible with the ''asymptotic freedom'' of quarks. Then, within a bi-scale theory of gravitational and strong interactions, a classical field theory is proposed for the (strong) interactions between hadrons. Various consequences are briefly analysed.

We consider an extension of gauge theories based on the assumption of a generalized uncertainty principle which implies a minimal length scale. A modification of the usual uncertainty principle implies an extended form of matter field equations such as the Dirac equation. If invariance of such a generalized field equation under local gauge transformations is postulated, the usual covariant derivative containing the gauge potential has to be replaced by a generalized covariant derivative. This leads to a generalized interaction between the matter field and the gauge field, as well as to an additional self-interaction of the gauge field. Since the existence of a minimal length scale seems to be a necessary assumption of any consistent quantum theory of gravity, since the gauge principle is a constitutive ingredient of the standard model, and since even gravity can be described as a gauge theory of local translations or Lorentz transformations, the presented extension of gauge theories appears to be a very important consideration.

In this paper the relationship between DSR (doubly special relativity) theories and the conformal group is reviewed. In addition, the relation between the DSR Magueijo-Smolin generators and generalized commutation relations is shown.

This paper explains the theoretical approach of the four theories of General Systems Theory (GST) developed by Yourdon (1989) [Yourdon, E. (1989). Modern Structured Analysis. Yourdon Press, Prentice-Hall International, Englewood Cliffs, New Jersey.], applying it in information technology, and as subsequently used by Caddy and Helou (2007) [Caddy, I.N., & Helou, M.M. (2007). Supply chains and their management: Application of general systems theory. Journal of Retailing and Consumer Services, 14, 319–327.] in the field of supply chain management. The JIT philosophy in the core activities of the supply chain, i.e. procurement, production processes, and logistics, is discussed through general systems theory. The growing structure of the supply chain imposes implementation restrictions and requires a heavy support system; a compromise is often made while implementing JIT. The study is useful for understanding the general trends that arise naturally in the adoption of the JIT philosophy in the supply chain.

A generalized theory of chromatography and multistep liquid extraction is developed. The principles of highly efficient processes for the fine preparative separation of binary mixture components on a fixed sorbent layer are discussed.

Absolute spacetime theories conceived for the purpose of testing special relativity (SR) are reviewed. It is found that most theories proposed were in fact SR in different coordinate systems, since in general no specific SR violations were introduced. Models based on possible SR-violating mechanisms are considered. Misconceptions in recently published papers are examined.

Considers the limitations of General Systems Theory (GST) as a major paradigm within administrative theory and concludes that most systems formulations overemphasize growth and show little appreciation for intraorganizational conflict, diversity of values, and political action within organizations. Suggests that these limitations are mainly due to…

An introduction to general systems theory and an overview of vocabulary and concepts are presented to introduce school business officials to systems thinking and to foster its use as an analytical tool. The theory is then used to analyze a sample problem: planning changes to a district's administrative computer system. (eight references) (MLF)

The most general covariant action describing gravity coupled to a scalar field with only second order equations of motion, Horndeski's theory (also known as ''Generalized Galileons''), provides an all-encompassing model in which single scalar dark energy models may be constrained. However, the generality of the model makes it cumbersome to manipulate. In this paper, we demonstrate that when considering linear perturbations about a Friedmann-Robertson-Walker background, the theory is completely specified by only six functions of time, two of which are constrained by the background evolution. We utilise the ideas of the Effective Field Theory of Inflation/Dark Energy to explicitly construct these six functions of time in terms of the free functions appearing in Horndeski's theory. These results are used to investigate the behavior of the theory in the quasistatic approximation. We find that only four functions of time are required to completely specify the linear behavior of the theory in this limit, which can further be reduced if the background evolution is fixed. This presents a significantly reduced parameter space from the original presentation of Horndeski's theory, giving hope to the possibility of constraining the parameter space. This work provides a cross-check for previous work on linear perturbations in this theory, and also generalizes it to include spatial curvature

We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.

The development of general strain theory (GST) has led to a renewed focus on the influence of negative life experiences on antisocial behavior. Although a number of studies have generated an impressive array of support for the theory, several avenues remain open for research. In this article, we examine how a specific noxious stimuli, peer…

There are diverse mechanisms driving the evolution of social networks. A key open question dealing with understanding their evolution is: How do various preferential linking mechanisms produce networks with different features? In this paper we first empirically study preferential linking phenomena in an evolving online social network, find and validate the linear preference. We propose an analyzable model which captures the real growth process of the network and reveals the underlying mechanism dominating its evolution. Furthermore based on preferential linking we propose a generalized model reproducing the evolution of online social networks, and present unified analytical results describing network characteristics for 27 preference scenarios. We study the mathematical structure of degree distributions and find that within the framework of preferential linking analytical degree distributions can only be the combinations of finite kinds of functions which are related to rational, logarithmic and inverse tangent functions, and extremely complex network structure will emerge even for very simple sublinear preferential linking. This work not only provides a verifiable origin for the emergence of various network characteristics in social networks, but bridges the micro individuals' behaviors and the global organization of social networks.
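The linear preference validated above can be mimicked by a standard growth model in a few lines. The sketch below is our own generic illustration, not the paper's model: each new node attaches to an existing node chosen with probability proportional to degree + a, where the offset `a` and the seed are invented values.

```python
import random

# Toy linear-preferential-attachment growth (illustrative parameters).
def grow(n, a=1.0, seed=0):
    rng = random.Random(seed)
    degree = [1, 1]                      # start from one edge between nodes 0 and 1
    for _ in range(2, n):
        weights = [d + a for d in degree]  # linear preference: degree + a
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for target, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        degree[target] += 1              # chosen node gains a link
        degree.append(1)                 # newcomer arrives with one link
    return degree

deg = grow(2000)
print(len(deg), sum(deg))  # → 2000 3998 (each of the n-1 edges counted twice)
```

With a linear kernel this produces the rich-get-richer effect behind heavy-tailed degree distributions; sublinear or shifted kernels change the tail shape, which is the kind of variation the paper's preference scenarios explore analytically.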

A theory is given for lifting the equations of motion of charged particle dynamics, subject to given electromagnetic-like forces, up to a gauge-free system of coupled Hamiltonian Vlasov-Maxwell-like equations. The theory provides very general expressions for the polarization and magnetization vector fields in terms of the particle-dynamics description of matter; thus, as is common in plasma physics, the particle dynamics replaces conventional constitutive relations for matter. Several examples are considered, including the usual Vlasov-Maxwell theory, a guiding-center kinetic theory, Vlasov-Maxwell theory with the inclusion of spin, and a Vlasov-Maxwell theory with the inclusion of Dirac's magnetic monopoles. All are shown to be Hamiltonian field theories, and the Jacobi identity is proven directly.

It is shown that even in the absence of the equivalence principle, the Newtonian theory of gravitation can be given a geometric form in a five-dimensional manifold. The fifth dimension is taken as the ratio of gravitational to inertial mass, which is allowed to differ between particles. The resulting ponderomotive and field equations in this five-dimensional space (which are generalizations of Cartan's formulation of Newtonian gravitation) are formulated and their consequences discussed. It is argued that, since general relativity is a 'metric' theory, a similar generalization of general relativity is not possible.

From the perspective of an analyst like myself, the General Theory of Relativity provides an extraordinarily rich and vastly virgin territory. It is the aim of my lecture to provide, first, an account of those aspects of the theory which attract me most and, second, a perspective on what has been accomplished so far in that respect. In trying to state our main objectives it helps to view general relativity in the broader context of classical field theory. The Einstein vacuum equations, or E-V for short, are already sufficiently complicated; I will thus restrict my attention to them.

A generalized perturbation theory (GPT) formulation is developed for application to light water reactor (LWR) design. The extensions made to standard generalized perturbation theory are the treatment of thermal-hydraulic and fission product poisoning feedbacks, and criticality reset. This formulation has been implemented into a standard LWR design code. The method is verified by comparing direct calculations with GPT calculations. Data are presented showing that feedback effects need to be considered when using GPT for LWR problems. Some specific potential applications of this theory to the field of LWR design are discussed

The recent efforts to define a basis for radioprotection of the environment include some concepts and ideas related to various endpoints which need clarification. This paper focuses on the biodiversity concept and the context of individuals of a species, as well as that of the species as a gene pool. A major problem with the ambition to radioprotect biodiversity is the concept of the 'reference organism', which has no genetic properties and is therefore in contradiction with a real biological species. Biodiversity and the species (gene pool) concept are, just as any other area of biology, integral parts of evolutionary theory. With the reference organism as a basis, no meaningful reasoning can take place which relates data on radioactivity levels or mutations to potential effects on populations or biodiversity. It is therefore suggested that the national and international bodies involved in radioprotection of the environment take advantage of evolutionary theory as a reference frame.

The Traveler's Dilemma game and the Minimum Effort Coordination game are two social dilemmas that have attracted considerable attention because the predictions of classical game theory are at odds with the results found when the games are studied experimentally. Moreover, a direct application of deterministic evolutionary game theory, as embodied in the replicator dynamics, to these games does not explain the observed behavior. In this work, we formulate natural variants of these two games as smoothed continuous-strategy games. We study the evolutionary dynamics of these continuous-strategy games, both analytically and through agent-based simulations, and show that the behavior predicted theoretically is in accord with that observed experimentally. Thus, these variants of the Traveler's Dilemma and the Minimum Effort Coordination games provide a simple resolution of the paradoxical behavior associated with the original games.
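For readers unfamiliar with the replicator dynamics invoked above, a minimal discretized version for a two-strategy symmetric game is sketched below. The payoffs are illustrative values of our own choosing, not the smoothed continuous-strategy games of the paper.

```python
# Replicator dynamics for a symmetric 2-strategy game:
# x is the frequency of strategy 0; xdot = x(1-x)[f0(x) - f1(x)],
# where fi is the expected payoff of strategy i against the population.

def replicator_step(x, payoff, dt=0.01):
    f0 = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f1 = payoff[1][0] * x + payoff[1][1] * (1 - x)
    return x + dt * x * (1 - x) * (f0 - f1)

payoff = [[2.0, 0.0],   # illustrative: strategy 1 strictly dominates strategy 0
          [3.0, 1.0]]
x = 0.9                 # start with the dominated strategy common
for _ in range(5000):
    x = replicator_step(x, payoff)
print(round(x, 3))  # → 0.0: the dominated strategy is driven out
```

In the games above, this kind of deterministic dynamics converges to the classical prediction, which is precisely what fails to match the experiments; the paper's smoothed continuous-strategy variants change that outcome.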

This study examines whether people use general implicit theories of creativity when applying them to themselves and others. On the basis of actor-observer asymmetry theory, the authors propose that conceptions of creativity are constructed differently depending on the target of attention: general, self, and other. Three studies…

J.F. Colombeau's generalized functions are constructed as equivalence classes of the elements of a specially chosen ultrapower of the class of C∞-functions. The elements of this ultrapower are considered as sequences of C∞-functions, so in a sense the sequential construction presented here relates to the original Colombeau theory just as, for example, the Mikusinski sequential approach to distribution theory relates to the original Schwartz theory of distributions. The paper can be used as an elementary introduction to the Colombeau theory, in which a solution was recently found to the problem of multiplication of Schwartz distributions.

The Sp(2) covariant quantization of gauge theories is studied. The geometrical interpretation of gauge theories in terms of quasi-principal fibre bundles Q(M_s, G_s) is reviewed. The Sp(2) algebra of ordinary Yang-Mills theory is then described. A consistent formulation of covariant Lagrangian quantization for general gauge theories based on Sp(2) BRST symmetry is established. The original N = 1, ten-dimensional superparticle is considered as an example of infinitely reducible gauge algebras, and its Sp(2) BRST-invariant action is given explicitly.

We present a generalized test theory of special relativity, using a noninertial frame. Within the framework of special relativity the transport and Einstein synchronizations are equivalent on a rigidly rotating disk, but in any theory with a preferred frame such an equivalence does not hold. The time difference resulting from the two synchronization procedures is a measurable quantity within the reach of existing clock systems on the earth. The final result contains a term which depends on the angular velocity of the rotating system and hence measures an absolute effect. This term is of crucial importance in our test theory of special relativity.

Khan's theory of reciprocity (Nuovo Cimento 57B:321 (1968) and Int. J. Theor. Phys. 6:383 (1972)) has been shown to be equivalent to the theory of general relativity (in a conformally flat space-time) in that the same physical predictions are made. It is proved that, since 'centrifugal forces' are used by Khan, gravitational phenomena are being considered equal in status to electromagnetic phenomena, and hence the difference claimed to exist between Milne's theory and Khan's theory disappears.

Using the cosmological perturbation theory in terms of the δN formalism, we find the simple formulation of the evolution of the curvature perturbation in generalized gravity theories. Compared with the standard gravity theory, a crucial difference appears in the end-boundary of the inflationary stage, which is due to the non-ideal form of the energy-momentum tensor that depends explicitly on the curvature scalar. Recent study shows that ultraviolet-complete quantum theory of gravity (Horava-Lifshitz gravity) can be approximated by using a generalized gravity action. Our paper may give an important step in understanding the evolution of the curvature perturbation during inflation, where the energy-momentum tensor may not be given by the ideal form due to the corrections from the fundamental theory.

The model of the world proposed by Whitehead provides a natural theoretical framework in which to embed quantum theory. This model accords with the ontological ideas of Heisenberg, and also with Einstein's view that physical theories should refer nominally to the objective physical situation, rather than to our knowledge of that system. Whitehead imposed on his model the relativistic requirement that what happens in any given spacetime region be determined only by what has happened in its absolute past, i.e., in the backward light-cone drawn from that region. This requirement must be modified, for it is inconsistent with the implications of quantum theory expressed by a generalized version of Bell's theorem. Revamping the causal spacetime structure of the Whitehead-Heisenberg ontology to bring it into accord with the generalized Bell's theorem creates the possibility of a nonlocal causal covariant theory that accords with the statistical predictions of quantum theory.

Progressive kidney disease follows nephron loss, hyperfiltration, and incomplete repair, a process described as "maladaptive." In the past 20 years, a new discipline has emerged that expands research horizons: evolutionary medicine. In contrast to physiologic (homeostatic) adaptation, evolutionary adaptation is the result of reproductive success that reflects natural selection. Evolutionary explanations for physiologically maladaptive responses can emerge from mismatch of the phenotype with environment or evolutionary tradeoffs. Evolutionary adaptation to a terrestrial environment resulted in a vulnerable energy-consuming renal tubule and a hypoxic, hyperosmolar microenvironment. Natural selection favors successful energy investment strategy: energy is allocated to maintenance of nephron integrity through reproductive years, but this declines with increasing senescence after ~40 years of age. Risk factors for chronic kidney disease include restricted fetal growth or preterm birth (life history tradeoff resulting in fewer nephrons), evolutionary selection for APOL1 mutations (that provide resistance to trypanosome infection, a tradeoff), and modern life experience (Western diet mismatch leading to diabetes and hypertension). Current advances in genomics, epigenetics, and developmental biology have revealed proximate causes of kidney disease, but attempts to slow kidney disease remain elusive. Evolutionary medicine provides a complementary approach by addressing ultimate causes of kidney disease. Marked variation in nephron number at birth, nephron heterogeneity, and changing susceptibility to kidney injury throughout life history are the result of evolutionary processes. Combined application of molecular genetics, evolutionary developmental biology (evo-devo), developmental programming and life history theory may yield new strategies for prevention and treatment of chronic kidney disease.

Immunization programs have often been impeded by vaccine scares, as evidenced by the measles-mumps-rubella (MMR) autism vaccine scare in Britain. A "free rider" effect may be partly responsible: vaccine-generated herd immunity can reduce disease incidence to such low levels that real or imagined vaccine risks appear large in comparison, causing individuals to cease vaccinating. This implies a feedback loop between disease prevalence and strategic individual vaccinating behavior. Here, we analyze a model based on evolutionary game theory that captures this feedback in the context of vaccine scares, and that also includes social learning. Vaccine risk perception evolves over time according to an exogenously imposed curve. We test the model against vaccine coverage data and disease incidence data from two vaccine scares in England & Wales: the whole cell pertussis vaccine scare and the MMR vaccine scare. The model fits vaccine coverage data from both vaccine scares relatively well. Moreover, the model can explain the vaccine coverage data more parsimoniously than most competing models without social learning and/or feedback (hence, adding social learning and feedback to a vaccine scare model improves model fit with little or no parsimony penalty). Under some circumstances, the model can predict future vaccine coverage and disease incidence--up to 10 years in advance in the case of pertussis--including specific qualitative features of the dynamics, such as future incidence peaks and undulations in vaccine coverage due to the population's response to changing disease incidence. Vaccine scares could become more common as eradication goals are approached for more vaccine-preventable diseases. Such models could help us predict how vaccine scares might unfold and assist mitigation efforts.
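The prevalence-behavior feedback loop described above can be caricatured in a few lines of imitation-style dynamics. The sketch below is a deliberately crude stand-in for the fitted model: the linear prevalence proxy, the herd-immunity level `xc`, the payoff weight `w`, and the scare window are all invented for illustration.

```python
# x = vaccine coverage. The perceived payoff of vaccinating rises with disease
# prevalence (which falls as coverage approaches a herd-immunity level xc)
# and falls with perceived vaccine risk. Imitation-style replicator update:
def step(x, risk, dt=0.01, w=5.0, xc=0.8):
    prevalence = max(0.0, 1.0 - x / xc)      # crude herd-immunity proxy
    gain = w * prevalence - risk             # net payoff of vaccinating
    x = x + dt * x * (1.0 - x) * gain
    return min(1.0, max(0.0, x))

coverage = []
x = 0.7
for t in range(3000):
    risk = 2.0 if 1000 <= t < 2000 else 0.1  # exogenous vaccine-scare window
    x = step(x, risk)
    coverage.append(x)

# Coverage settles high, dips during the scare, then recovers afterwards.
print(coverage[999] > 0.75, min(coverage[1000:2000]) < 0.6, coverage[-1] > 0.7)
```

Even this toy version reproduces the qualitative undulation in coverage that the abstract describes: low incidence near herd immunity makes any perceived vaccine risk loom large, so a scare drives coverage down until rising prevalence makes vaccination pay again.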

The source strengths of the Euler-Lagrange equations, for a system of interacting fields, are heuristically interpreted as generalized forces. The canonical form of the energy-momentum tensor thus consistently appears, without recourse to space-time symmetry arguments. A concept of 'conservative' generalized force in classical field theory is also briefly discussed.

To understand the evolution of general intelligence, Burkart et al. endorse a "cultural intelligence approach," which emphasizes the critical importance of social interaction. We argue that theory of mind provides an essential foundation and shared perspective for the efficient ontogenetic transmission of crucial knowledge and skills during human development and, together with language, can account for superior human general intelligence.

The main principles of relativity and gravitation theories are analyzed in depth. The limits of applicability of these theories, and possible ways of modifying and generalizing them, are discussed. It is shown that the notion of general relativity introduces no post-Newtonian physics: it deals only with coordinate transformations. It is argued that "general relativity" is a physically meaningless phrase that can be regarded only as a historical remnant of an interesting philosophical discourse. The paper shows that the Einstein gravitation theory admits an adequate physical foundation that does not involve the physically meaningless concept of general relativity and that clarifies its fundamental relation with experiment.

General relativity cannot describe exchange of classical intrinsic angular momentum and orbital angular momentum. Einstein-Cartan theory fixes this problem in the least invasive way. In the late 20th century, the consensus view was that Einstein-Cartan theory requires inclusion of torsion without adequate justification, it has no empirical support (though it doesn't conflict with any known evidence), it solves no important problem, and it complicates gravitational theory with no compensating benefit. In 1986 the author published a derivation of Einstein-Cartan theory from general relativity, with no additional assumptions or parameters. Starting without torsion, Poincaré symmetry, classical or quantum spin, or spinors, it derives torsion and its relation to spin from a continuum limit of general relativistic solutions. The present work makes the case that this computation, combined with supporting arguments, constitutes a derivation of Einstein-Cartan theory from general relativity, not just a plausibility argument. This paper adds more and simpler explanations, more computational details, correction of a factor of 2, discussion of limitations of the derivation, and discussion of some areas of gravitational research where Einstein-Cartan theory is relevant.

The content of the present thesis is organized as follows. First, the most important elements of quantum theory and the general theory of relativity are presented. For general relativity, the mathematical property of diffeomorphism invariance plays the decisive role; for quantum theory, starting from the Copenhagen interpretation, the measurement problem is treated first, after which nonlocality is brought into focus as an important property, based on the analysis of concrete phenomena and of the mathematical apparatus of quantum theory. Both theories thus suggest a relational view of the nature of space. This analysis of the theoretical foundations of quantum theory and general relativity in relation to the nature of space attains its full persuasive power only when Kant's philosophy, with its analysis of space and time as fundamental forms of perception, is included. Then von Weizsaecker's quantum theory of ur-alternatives is presented. Finally, an attempt is made to apply the knowledge obtained to the question of a quantum-theoretical formulation of the general theory of relativity.

Since the mid-1970s, cancer has been described as a process of Darwinian evolution, with somatic cellular selection and evolution being the fundamental processes leading to malignancy and its many manifestations (neoangiogenesis, evasion of the immune system, metastasis, and resistance to therapies). Historically, little attention has been paid to applications of evolutionary biology to understanding and controlling neoplastic progression and to preventing therapeutic failures. This is now beginning to change, and there is a growing international interest in the interface between cancer and evolutionary biology. The objective of this introduction is first to describe the basic ideas and concepts linking evolutionary biology to cancer. We then present four major fronts where the evolutionary perspective is most developed, namely laboratory and clinical models, mathematical models, databases, and techniques and assays. Finally, we discuss several of the most promising challenges and future prospects in this interdisciplinary research direction in the war against cancer.

Strong reciprocity and other forms of cooperation with non-kin in large groups and in one-time social interactions are difficult to explain with traditional economic or with simple evolutionary accounts...

We derive the general counting rules for a quantum effective field theory (EFT) in $\mathsf{d}$ dimensions. The rules are valid for strongly and weakly coupled theories, and predict that all kinetic energy terms are canonically normalized. They determine the energy dependence of scattering cross sections in the range of validity of the EFT expansion. The size of cross sections is controlled by the $\Lambda$ power counting of EFT, not by chiral counting, even for chiral perturbation theory ($\chi$PT). The relation between $\Lambda$ and $f$ is generalized to $\mathsf{d}$ dimensions. We show that the naive dimensional analysis $4\pi$ counting is related to $\hbar$ counting. The EFT counting rules are applied to $\chi$PT, to Standard Model EFT and to the non-trivial case of Higgs EFT, which combines the $\Lambda$ and chiral counting rules within a single theory.
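For orientation, the standard four-dimensional naive-dimensional-analysis master formula (Manohar-Georgi normalization; a textbook baseline, not the $\mathsf{d}$-dimensional generalization derived in the paper) reads:

```latex
% 4d NDA master formula, with Lambda = 4*pi*f:
\mathcal{L} \sim \frac{\Lambda^4}{16\pi^2}
  \left[\frac{\partial}{\Lambda}\right]^{N_p}
  \left[\frac{4\pi\,\phi}{\Lambda}\right]^{N_\phi}
  \left[\frac{4\pi\,\psi}{\Lambda^{3/2}}\right]^{N_\psi},
  \qquad \Lambda = 4\pi f .
```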

This book lays out a new, general theory of light propagation and imaging through Earth’s turbulent atmosphere. Current theory is based on the – now widely doubted – assumption of Kolmogorov turbulence. The new theory is based on a generalized atmosphere, the turbulence characteristics of which can be established, as needed, from readily measurable properties of point-object, or star, images. The pessimistic resolution predictions of Kolmogorov theory led to lax optical tolerance prescriptions for large ground-based astronomical telescopes which were widely adhered to in the 1970s and 1980s. Around 1990, however, it became clear that much better resolution was actually possible, and Kolmogorov tolerance prescriptions were promptly abandoned. Most large telescopes built before 1990 have had their optics upgraded (e.g., the UKIRT instrument) and now achieve, without adaptive optics (AO), almost an order of magnitude better resolution than before. As well as providing a more comprehensive and precise under...

Convective circulations and vortices are common features of atmospheres that absorb low-entropy energy at higher temperatures than those at which they reject high-entropy energy to space. These circulations range from small to planetary scale and play an important role in the vertical transport of heat, momentum, and tracer species. Thus, the development of theoretical models for convective phenomena is important to our understanding of many basic features of planetary atmospheres. A thermodynamically general theory for convective circulations and vortices is proposed. The theory includes irreversible processes and quantifies the pressure drop between the environment and any point in a convective updraft. The article's main result is that the proposed theory provides an expression for the pressure drop along streamlines or streamtubes that is a generalization of Bernoulli's equation to convective circulations. We speculate that the proposed theory not only explains the intensity, but also sheds light on other basic features of convective circulations and vortices.
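For reference, the classical Bernoulli equation that the proposed theory generalizes (standard textbook form for steady, inviscid, incompressible flow, without the irreversible terms added by the paper) is:

```latex
p + \tfrac{1}{2}\rho v^{2} + \rho g z = \text{const} \quad \text{along a streamline.}
```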

We analyze the moments of the isosinglet generalized parton distributions H, E, H̃, Ẽ of the nucleon in one-loop order of heavy-baryon chiral perturbation theory. We discuss in detail the construction of the operators in the effective theory that are required to obtain all corrections to a given order in the chiral power counting. The results will serve to improve the extrapolation of lattice results to the chiral limit. (orig.)

In the presence of loss and gain, the coupled mode equations describing the hybridization of modes of various waveguides or cavities, or of cavities coupled to waveguides, become intrinsically non-Hermitian. In such non-Hermitian waveguides, the standard coupled mode theory fails. We generalize the coupled mode theory with a properly defined inner product based on reaction conservation. We apply our theory to non-Hermitian parity-time-symmetric waveguides, and obtain excellent agreement with results obtained by full-wave finite element simulations. The theory presented here is formulated in space to study coupling between waveguides; it can be transformed into the time domain by a suitable reformulation to study coupling between non-Hermitian resonators. Our theory has the strength of treating non-Hermitian optical systems with the full vector fields included, and is thus useful for studying and designing non-Hermitian devices that support asymmetric and even nonreciprocal light propagation.
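The simplest concrete instance of such a non-Hermitian coupled mode problem is a two-waveguide parity-time-symmetric coupler; the textbook sketch below (illustrative parameters, not the paper's full-vector formulation) shows the PT-breaking transition in the supermode spectrum:

```python
import numpy as np

# Textbook sketch: a parity-time (PT) symmetric pair of coupled waveguides,
# one with gain +g and one with loss -g, coupled at strength k.  The 2x2
# effective Hamiltonian has eigenvalues +/- sqrt(k**2 - g**2): real below
# the PT-breaking threshold (k > g), purely imaginary above it (k < g).
# Parameter values are illustrative.

def supermode_eigenvalues(g, k):
    H = np.array([[1j * g, k],
                  [k, -1j * g]])
    return np.linalg.eigvals(H)

unbroken = supermode_eigenvalues(g=0.3, k=1.0)   # PT-unbroken: real pair
broken = supermode_eigenvalues(g=1.0, k=0.3)     # PT-broken: imaginary pair
```

The standard (conjugated) inner product treats these supermodes as non-orthogonal; a reaction-based, unconjugated product of the kind the paper proposes restores a consistent mode expansion.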

Quantum field theory of Einstein's general relativity is formulated in the indefinite-metric Hilbert space in such a way that asymptotic fields are manifestly Lorentz covariant and the physical S-matrix is unitary. The general coordinate transformation is transcribed into a q-number transformation, called the BRS transformation. Its abstract definition is presented on the basis of the BRS transformation for the Yang-Mills theory. The BRS transformation for general relativity is then explicitly constructed. The gauge-fixing Lagrangian density and the Faddeev-Popov one are introduced in such a way that their sum behaves like a scalar density under the BRS transformation. One can then proceed in the same way as in the Kugo-Ojima formalism of the Yang-Mills theory to establish the unitarity of the physical S-matrix. (author)

If one accepts Einstein's general principle of relativity (covariance principle) also for the sphere of microphysics (quantum mechanics, quantum field theory, theory of elementary particles), one has to ask how far the fundamental laws of traditional quantum physics fulfil this principle. Attention is drawn here to a series of papers that have appeared in recent years, in which the author criticized the usual scheme of quantum theory (Heisenberg picture, Schroedinger picture, etc.) and presented a new foundation of the basic laws of quantum physics obeying the 'principle of fundamental covariance' (Einstein's covariance principle in space-time and a covariance principle in the Hilbert space of quantum operators and states). (author)

How can we relate the constraint structure and constraint dynamics of a general gauge theory in the Hamiltonian formulation to specific features of the theory in the Lagrangian formulation, and in particular relate the constraint structure to the gauge transformation structure of the Lagrangian action? How can we construct the general expression for the gauge charge if the constraint structure in the Hamiltonian formulation is known? Can we identify the physical functions, defined as those commuting with first-class constraints in the Hamiltonian formulation, with the physical functions defined as gauge-invariant functions in the Lagrangian formulation? The aim of the present paper is to consider the general quadratic gauge theory and to answer these questions for such a theory in terms of strict assertions. To fulfil this programme, we demonstrate the existence of so-called superspecial phase-space variables in terms of which the quadratic Hamiltonian action takes a simple canonical form. On the basis of such a representation, we analyse the functional arbitrariness in the solutions of the equations of motion of the quadratic gauge theory and derive the general structure of symmetries by analysing a symmetry equation. We then use these results to identify the two definitions of physical functions and thus prove the Dirac conjecture.

The topic of this thesis is compactifications in string theory and supergravity. We study dimensional reductions of type II theories on backgrounds with fluxes, using the techniques of Hitchin's generalized geometry. We start with an introduction of the needed mathematical tools, focusing on SU(3)×SU(3) structures on the generalized tangent bundle T+T*, and analyzing their deformations. Next we study the four-dimensional N = 2 gauged supergravity which can be defined by reducing type II theories on SU(3)×SU(3) structure backgrounds with general NSNS and RR fluxes: we establish the complete bosonic action, and we show how its data are related to the generalized geometry formalism on T+T*. In particular, we derive a geometric expression for the full N = 2 scalar potential. Then we focus on the relations between the 10d and 4d descriptions of supersymmetric flux backgrounds: we spell out the N = 1 vacuum conditions within the 4d N = 2 theory, as well as from its N = 1 truncation, and we establish a precise matching with the equations characterizing the N = 1 backgrounds at the ten-dimensional level. We conclude by presenting some concrete examples, based on coset spaces with SU(3) structure. We establish for these spaces the consistency of the truncation based on left-invariance, and we explore the landscape of vacua of the corresponding theory, taking string loop corrections into account. (author)

In the frame of quantum field theory, instead of using the action principle, we deduce the Einstein equation purely from the general covariance principle and the homogeneity of spacetime. The Einstein equation is shown to be the gauge equation that guarantees the local symmetry of spacetime translation. Gravity is an apparent force due to the curvature of spacetime resulting from the conservation of energy-momentum. In the action of quantum field theory, only electroweak-strong interactions should be considered, with the curved spacetime metric determined by the Einstein equation.

The σ-ω model Lagrangian is generalized to an accelerated frame by using the technique of general relativity known as the tetrad formalism. We apply this model to the description of rotating nuclei within the mean field approximation, which we call General Relativistic Mean Field Theory (GRMFT) for rotating nuclei. The resulting equations of motion coincide with those of the Munich group, whose formulation was not based on the general relativistic transformation property of the spinor fields. Some numerical results are shown for the yrast states of the Mg isotopes and the superdeformed rotational bands in the A ≈ 60 mass region. (author)

This original and timely monograph describes a unique self-contained excursion that reveals to the readers the roles of two basic cognitive abilities, i.e. intention recognition and arranging commitments, in the evolution of cooperative behavior. This book analyses intention recognition, an important ability that helps agents predict others’ behavior, in its artificial intelligence and evolutionary computational modeling aspects, and proposes a novel intention recognition method. Furthermore, the book presents a new framework for intention-based decision making and illustrates several ways in which an ability to recognize intentions of others can enhance a decision making process. By employing the new intention recognition method and the tools of evolutionary game theory, this book introduces computational models demonstrating that intention recognition promotes the emergence of cooperation within populations of self-regarding agents. Finally, the book describes how commitment provides a pathway to the evol...

This is a short review of today's gauge gravity theories and their relations with Einstein's General Relativity. The conceptions underlying the construction of gauge gravity theories with higher derivatives are analyzed. GR is regarded as the gauge gravity theory corresponding to the choice of G∞4 as the local gauge symmetry group and the symmetric rank-two tensor g_μν as the field variable. Using a mathematical technique that is common to all fundamental interactions (namely, the variational formalism for infinite Lie groups), we can obtain Einstein's theory as the gauge theory without any changes. All other gauge approaches lead to non-Einstein theories of gravity. But the above-mentioned mathematical technique permits us to construct gauge gravity theories of higher order (for instance, SO(3,1)-gravity) such that all vacuum solutions of the Einstein equations are solutions of the SO(3,1)-gravity theory. The structure of the equations of SO(3,1)-gravity becomes analogous to that of Wheeler-Misner geometrodynamics.

This paper identifies what seem to have been the five main issues in contention in monetary theory, both historically and in the current era, and discusses the view that J.M. Keynes took on each of them in the Treatise on Money and The General Theory. The key issues in monetary theory are the ontology of money, endogenous versus exogenous money, interest-rate determination, the choice of the monetary policy instrument, and the neutrality versus non-neutrality of money.

We give an example of a generally covariant quasilocal algebra associated with the massive free field. Maximal two-sided ideals of this algebra are algebraic representatives of external metric fields. In some sense, this algebra may be regarded as a concrete realization of Ekstein's ideas of presymmetry in quantum field theory. Using ideas from our example and from the usual algebraic quantum field theory, we discuss a generalized scheme in which maximal ideals are viewed as algebraic representatives of dynamical equations or Lagrangians. The framework considered is not a quantum gravity, but it may lead to further insight into the relation between quantum theory and space-time geometry. (orig.)

Progress in science often begins with verbal hypotheses meant to explain why certain biological phenomena exist. An important purpose of mathematical models in evolutionary research, as in many other fields, is to act as “proof-of-concept” tests of the logic in verbal explanations, paralleling the way in which empirical data are used to test hypotheses. Because not all subfields of biology use mathematics for this purpose, misunderstandings of the function of proof-of-concept modeling are common. In the hope of facilitating communication, we discuss the role of proof-of-concept modeling in evolutionary biology.

We derive and explain the key ideas behind a time-dependent formulation of quantum scattering theory, applicable generally to systems with a finite-range scattering potential. The scattering is initiated and probed by plane wave packets, which are localized just outside the range of the potential. The asymptotic limits of conventional scattering theory (initiation in the remote past; detection in the remote future) are not taken. Instead, the differential cross section (DCS) is obtained by projecting the scattered wave packet onto the probe plane wave packets. The projection also yields a time-dependent version of the DCS. Cuts through the wave packet, just as it exits the scattering potential, yield time-dependent and time-independent angular distributions that give a close-up picture of the scattering which complements the DCS. We have previously applied the theory to interpret experimental cross sections of chemical reactions [e.g., S. C. Althorpe, F. Fernandez-Alonso, B. D. Bean, J. D. Ayers, A. E. Pomerantz, R. N. Zare, and E. Wrede, Nature (London) 416, 67 (2002)]. This paper gives the derivation of the theory, and explains its relation to conventional scattering theory. For clarity, the derivation is restricted to spherical-particle scattering, though it may readily be extended to general multichannel systems. We illustrate the theory using a simple application to hard-sphere scattering
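For comparison with the time-dependent formulation, the standard time-independent partial-wave result for hard-sphere scattering can be sketched as follows (a textbook baseline, not the paper's wave-packet method; only l ≤ 2 is kept, which is adequate for ka ≪ 1):

```python
import numpy as np

# Textbook sketch: elastic cross section for quantum hard-sphere scattering,
#   sigma = (4*pi/k**2) * sum_l (2l+1) * sin^2(delta_l),
# with phase shifts tan(delta_l) = j_l(ka) / y_l(ka).  The spherical Bessel
# functions j_l, y_l for l = 0, 1, 2 are written out explicitly.

def hard_sphere_cross_section(k, a):
    x = k * a
    j = [np.sin(x) / x,
         np.sin(x) / x**2 - np.cos(x) / x,
         (3.0 / x**3 - 1.0 / x) * np.sin(x) - 3.0 * np.cos(x) / x**2]
    y = [-np.cos(x) / x,
         -np.cos(x) / x**2 - np.sin(x) / x,
         (-3.0 / x**3 + 1.0 / x) * np.cos(x) - 3.0 * np.sin(x) / x**2]
    sigma = 0.0
    for l in range(3):
        delta = np.arctan2(j[l], y[l])   # tan(delta_l) = j_l / y_l
        sigma += (2 * l + 1) * np.sin(delta)**2
    return 4.0 * np.pi / k**2 * sigma

sigma_low = hard_sphere_cross_section(k=0.01, a=1.0)
```

In the low-energy limit ka → 0, the s-wave phase shift δ₀ = -ka dominates and the cross section approaches 4πa², which makes a convenient sanity check.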

Wigner first proposed a perturbation theory as early as 1945 to study fundamental quantities such as the reactivity worths of different materials. The first formulation, conventional perturbation theory (CPT), is based on universal quantum mechanics concepts. Since that early conception, significant contributions have been made to CPT, in particular by Soodak, who rendered a heuristic interpretation of the adjoint function (leading to what is referred to as the GPT method, for generalized perturbation theory). The author illustrates the GPT methodology in a variety of linear and nonlinear domains encountered in nuclear reactor analysis. The author begins with the familiar linear neutron field and then generalizes the methodology to other linear and nonlinear fields, using heuristic arguments. The author believes that the inherent simplicity and elegance of the heuristic derivation, although intended here for reactor physics problems, might be usefully adopted in collateral fields, and includes such examples.

The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing the Tsallis entropy, with the constraint that the normalizing term of Tsallis' q-average of physical quantities, the sum ∑_j p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.
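The building block underlying all of these q-deformed distributions is the q-exponential that replaces the Boltzmann factor. A minimal sketch (standard Tsallis-statistics material, not the paper's derivation):

```python
import numpy as np

# The q-exponential exp_q(x) = [1 + (1-q) x]_+^{1/(1-q)}, which replaces
# the Boltzmann factor exp(x) in Tsallis statistics and reduces to it as
# q -> 1.  The [.]_+ cutoff sets the result to zero where 1 + (1-q) x < 0
# (relevant for q < 1).

def exp_q(x, q):
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

xs = np.linspace(-2.0, 2.0, 9)
near_boltzmann = exp_q(xs, 1.0 + 1e-6)   # almost exp(xs)
```

The q-deformed Bose-Einstein and Fermi-Dirac occupations discussed in the abstract are obtained by using this deformed factor in place of the ordinary exponential.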

In this paper generalized scalar tensor theory has been considered in the background of anisotropic cosmological models, namely axially symmetric Bianchi-I, Bianchi-III and Kantowski–Sachs space-times. For bulk viscous fluid, both exponential and power-law solutions have been studied and some assumptions ...

Contains a study of the basic concepts and principles of variational analysis and generalized differentiation in both finite-dimensional and infinite-dimensional spaces. This title presents many applications to problems in optimization, equilibria, stability and sensitivity, control theory, economics, mechanics, and more.

A project is described for an experiment which uses the effect of gravitation on maser-type clocks placed on the ground at two different heights, and which is designed to verify the general theory of relativity. Reprint of a paper published in Comptes rendus des seances de l'Academie des Sciences, t. 250, p. 299-301, sitting of 11 January 1960 (in French).

The possibility of using generalized perturbation theory to calculate the perturbation of the flux disadvantage factors of a reactor cell resulting from the variation of the cell parameters is studied. For simplicity, the one-group diffusion approximation is considered. All necessary equations are derived for variations of the cell dimensions. Numerical results are presented in the paper.

Much of the prior work on General Strain Theory (GST) has focused on how strain and negative emotions interrelate to produce criminal--especially violent--activity. Very little research has extended GST to examine other types of non-criminal, negative behavior, such as self-harming behaviors associated with disordered eating, a traditionally…

A theory of the thermo-elastic dissipation in vibrating bodies is developed, starting from the three-dimensional thermo-elastic equations. After a discussion of the basic thermodynamical foundations, some general considerations on the problem of the conversion of mechanical energy into heat are ...

Describes basic concepts in the field of general systems theory (GST) and explains the relationship between instructional systems design (ISD) and GST. Benefits of integrating GST into the curriculum of ISD graduate programs are discussed, and a short bibliography on GST is included. (LRW)

The canonical commutation relations are analyzed in detail in the manifestly covariant quantum field theory of general relativity proposed previously. It is explicitly proved that the BRS charge is indeed the generator of the BRS transformation both in the Landau gauge and in the non-Landau one. The equivalence between the field equations and the Heisenberg equations is confirmed. (author)

We consider models of (d-n)-dimensional membranes fluctuating in a d-dimensional space under the action of surface tension. We investigate the renormalization properties of these models perturbatively and in a 1/n expansion. The potential relationships of these models to generalized Z₂ gauge theories are indicated. (orig.)

We present a new general theory for obtaining mixture properties from the pure-species equations of state. The theory addresses the composition dependence and the unlike-interaction dependence of the mixture equation of state. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike-interaction parameters. In this first part of the work, details are presented for the first and second levels of approximation for spherical molecules. The second-order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar with a size ratio of three). For nonadditive hard spheres, comparison with compressibility factor simulation data over a wide range of density, composition, and nonadditivity parameter gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Andersen perturbation theory.
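As a point of reference for the hard-sphere comparisons quoted above, the Carnahan-Starling equation of state for a pure hard-sphere fluid (the standard benchmark; the paper's own mixture model is not reproduced here) can be sketched as:

```python
# Standard textbook benchmark, not the paper's mixture model: the
# Carnahan-Starling equation of state for a pure hard-sphere fluid.
# eta is the packing fraction; Z = pV/(NkT) is the compressibility factor.

def carnahan_starling_Z(eta):
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3

# Z -> 1 in the ideal-gas limit (eta -> 0) and rises steeply as eta grows.
```

Mixture models of the kind the abstract describes are typically judged by how closely they reproduce this pure-fluid limit when all species become identical.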

We present a general theory of spontaneous emission at exceptional points (EPs), exotic degeneracies in non-Hermitian systems. Our theory extends beyond spontaneous emission to any light-matter interaction described by the local density of states (e.g., absorption, thermal emission, and nonlinear frequency conversion). Whereas traditional spontaneous-emission theories imply infinite enhancement factors at EPs, we derive finite bounds on the enhancement, proving a maximum enhancement of 4 in passive systems with second-order EPs and significantly larger enhancements (exceeding 400×) in gain-aided and higher-order EP systems. In contrast to non-degenerate resonances, which are typically associated with Lorentzian emission curves in systems with low losses, EPs are associated with non-Lorentzian lineshapes, leading to enhancements that scale nonlinearly with the resonance quality factor. Our theory can be applied to dispersive media, with proper normalization of the resonant modes.

This book presents the fundamentals of irreversible thermodynamics for nonlinear transport processes in gases and liquids, as well as for generalized hydrodynamics extending the classical hydrodynamics of Navier, Stokes, Fourier, and Fick. Together with its companion volume on relativistic theories, it provides a comprehensive picture of the kinetic theory formulated from the viewpoint of nonequilibrium ensembles in both nonrelativistic and, in Vol. 2, relativistic contexts. Theories of macroscopic irreversible processes must strictly conform to the thermodynamic laws at every step and in all approximations that enter their derivation from the mechanical principles. Upholding this as the inviolable tenet, the author develops theories of irreversible transport processes in fluids (gases or liquids) on the basis of irreversible kinetic equations satisfying the H theorem. They apply regardless of whether the processes are near to or far removed from equilibrium, or whether they are linear or nonlinear with respe...

The goal of this paper is to present the Kaluza-Klein theory. In the first part we discuss the theory elaborated by Kaluza and Klein in a five-dimensional Riemann space, which unifies gravitation with electromagnetism. The second part discusses the generalization of this theory to a space with 4+n dimensions, a mathematical product of the four-dimensional Riemannian manifold and the n-dimensional homogeneous space G/H. In the last part we propose a Kaluza-Klein-like theory in a fiber bundle space with 4+n dimensions. Every part is structured as follows: the metric tensor G is identified for the gravitation and the Yang-Mills potentials; then the equations of geodesics and the field equations are deduced. (author)

The principal purpose of this paper is to present a comprehensive overview of generalized information theory (GIT): a research program whose objective is to develop a broad treatment of uncertainty-based information, not restricted to classical notions of uncertainty. After a brief overview of classical information theories, a broad framework for formalizing uncertainty and the associated uncertainty-based information of a great spectrum of conceivable types is sketched. The various theories of imprecise probabilities that have already been developed within this framework are then surveyed, focusing primarily on some important unifying principles applying to all these theories. This is followed by introducing two higher levels of the theories of imprecise probabilities: (i) the level of measuring the amount of relevant uncertainty (predictive, retrodictive, prescriptive, diagnostic, etc.) in any situation formalizable in each given theory, and (ii) the level of some methodological principles of uncertainty, which are contingent upon the capability to measure uncertainty and the associated uncertainty-based information. Various issues regarding both the measurement of uncertainty and the uncertainty principles are discussed. Again, the focus is on unifying principles applicable to all the theories. Finally, the current status of GIT is assessed and future research in the area is discussed

We rewrite the recently derived cubic action of Double Field Theory on group manifolds http://dx.doi.org/10.1007/JHEP02(2015)001 in terms of a generalized metric and extrapolate it to all orders in the fields. For the resulting action, we derive the field equations and state them in terms of a generalized curvature scalar and a generalized Ricci tensor. Compared to the generalized metric formulation of DFT derived from tori, all these quantities receive additional contributions related to the non-trivial background. It is shown that the action is invariant under its generalized diffeomorphisms and 2D-diffeomorphisms. Imposing additional constraints relating the background and fluctuations around it, the precise relation between the proposed generalized metric formulation of DFT WZW and of original DFT from tori is clarified. Furthermore, we show how to relate DFT WZW of the WZW background with the flux formulation of original DFT.

Perturbation theory for changes in linear and bilinear functionals of the forward and adjoint fluxes in a critical reactor has been implemented using two-dimensional discrete ordinates transport theory. The computer program DOT IV was modified to calculate the generalized functions Λ and Λ*. Demonstration calculations were performed for changes in a reaction-rate ratio and a reactivity worth caused by system perturbations. The perturbation theory predictions agreed with direct calculations to within about 2%. A method has been developed for calculating higher lambda eigenvalues and eigenfunctions using techniques similar to those developed for generalized functions. Demonstration calculations have been performed to obtain these eigenfunctions
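The agreement between perturbation-theory predictions and direct calculations can be illustrated on a much smaller scale than the DOT IV transport setup described above. The sketch below (a toy example with made-up numbers, not the reactor problem) compares the first-order eigenvalue-shift estimate v^T dA v against a direct recalculation for a symmetric 2x2 matrix:

```python
import math

# Toy illustration of first-order perturbation theory: for a symmetric
# 2x2 matrix [[a, b], [b, c]], the first-order change in the largest
# eigenvalue under a small symmetric perturbation dA is v^T dA v,
# where v is the unperturbed unit eigenvector.

def eig_max(a, b, c):
    """Largest eigenvalue and unit eigenvector of [[a, b], [b, c]]."""
    lam = 0.5 * (a + c + math.sqrt((a - c) ** 2 + 4 * b * b))
    vx, vy = b, lam - a               # (b, lam - a) spans the eigenspace
    n = math.hypot(vx, vy)
    return lam, (vx / n, vy / n)

A  = (2.0, 0.3, 1.0)                  # a, b, c of the unperturbed matrix
dA = (0.05, 0.01, -0.02)              # small symmetric perturbation

lam0, (vx, vy) = eig_max(*A)
# First-order (perturbation-theory) estimate of the eigenvalue shift.
est = dA[0] * vx * vx + 2 * dA[1] * vx * vy + dA[2] * vy * vy
# Direct recalculation with the perturbed matrix.
lam1, _ = eig_max(A[0] + dA[0], A[1] + dA[1], A[2] + dA[2])
exact = lam1 - lam0

print(f"estimate {est:.6f}  direct {exact:.6f}")
```

The residual between the two numbers is second order in the perturbation, which is why agreement at the few-percent level is typical for small system changes.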

The system of spherically symmetric vacuum equations of the General Relativity Theory is considered. The general solution to the problem, representing two classes of line elements with arbitrary functions g_00 and g_22, is obtained. The properties of the found solutions are analyzed.

A one-dimensional ideal gas with negative compressibility described by quasi-Chaplygin equations is discussed. Its reduction to a Laplace equation is shown, and an evolutionary principle for selecting spontaneous solutions is summarized. Three extremely simple spontaneous solutions are obtained along with multidimensional self-similar solutions. The Buneman instability in a plasma is considered as an example.

Several theories with a scalar field can be derived from different variational principles. Here a very general variational principle is considered, and it is proved that, in the exterior case without electromagnetic field, the solution for a particular case generates the set of solutions for the general case. This is applied to the exterior solution in the static case with spherical symmetry without electromagnetic field. The predictions for the classic effects and the event horizons are investigated, and some limitations on the variational principles, which generalize the usual limitations, are obtained. In all these cases the Schwarzschild solution with its horizon appears as a very particular case.

A generalization of the neutron diffusion equation is introduced, the solution of which is an accurate approximation to the transport scalar flux. In this generalization the auxiliary transport calculations of the system of interest are utilized to compute an accurate, pointwise diffusion coefficient. A procedure is specified to generate and improve this auxiliary information in a systematic way, leading to improvement in the calculated diffusion scalar flux. This improvement is shown to be contingent upon satisfying the condition of positive calculated diffusion coefficients, and an algorithm that ensures this positivity is presented. The generalized diffusion theory is also shown to be compatible with conventional diffusion theory in the sense that the same methods and codes can be used to calculate a solution for both. The accuracy of the method compared to reference S_N transport calculations is demonstrated for a wide variety of examples.

In this work we develop, in a somewhat extensive manner, a geometric theory of chiral elasticity which in general is endowed with geometric discontinuities (sometimes referred to as defects). By itself, the present theory generalizes both Cosserat and void elasticity theories to a certain extent via geometrization as well as by taking into account the action of the electromagnetic field, i.e., the incorporation of the electromagnetic field into the description of the so-called microspin (chirality), which also forms the underlying structure of this work. As we know, the description of the electromagnetic field as a unified phenomenon requires four-dimensional space-time rather than three-dimensional space as its background. For this reason we embed the three-dimensional material space in four-dimensional space-time. This way, the electromagnetic spin is coupled to the non-electromagnetic microspin, both being parts of the complete microspin to be added to the macrospin in the full description of vorticity. In short, our objective is to generalize the existing continuum theories by especially describing microspin phenomena in a fully geometric way.

Several important comments on the General Theory of Quantized Fields are supplemented here. Our theory is based on (Riemannian) momentum spaces with finite volumes. Our theory is formulated in a specific inertial frame, i.e., the rest frame of the cosmic background radiation (RF-CBR). To go to another reference frame, we rely on general coordinate (in our case, energy and momentum variables, p-representation) transformations and the principle of general relativity. We find a degeneracy in the energy levels of all elementary particles (the same values of all particle energies appear twice, as compared to the conventional field theories). This doubling of energy levels might be important at the beginning (very early stage) of our evolving universe. However, we may not wish to have such a doubling at the present epoch. We can avoid the doubling by introducing appropriate (natural and rational, of course) Yukawa interactions among fermions and bosons. Then it is easy to realize the situation in which elementary particles populating one half of the energy levels (called 'our particles', having normal spin multiplicity) do not 'interact' with particles populating the other half of the energy levels except through gravity. The particles in the latter group may be called 'dark matter particles', which are the most natural candidates for dark matter. We have already emphasized that other candidates for dark matter are the zero-point vibration energy of all elementary particles and the energy of the vacuum due to interaction Hamiltonians.

We investigate the concept of entropy in probabilistic theories more general than quantum mechanics, with particular reference to the notion of information causality (IC) recently proposed by Pawlowski et al (2009, arXiv:0905.2292). We consider two entropic quantities, which we term measurement and mixing entropy. In the context of classical and quantum theory, these coincide, being given by the Shannon and von Neumann entropies, respectively; in general, however, they are very different. In particular, while measurement entropy is easily seen to be concave, mixing entropy need not be. In fact, as we show, mixing entropy is not concave whenever the state space is a non-simplicial polytope. Thus, the condition that measurement and mixing entropies coincide is a strong constraint on possible theories. We call theories with this property monoentropic. Measurement entropy is subadditive, but not in general strongly subadditive. Equivalently, if we define the mutual information between two systems A and B by the usual formula I(A:B) = H(A) + H(B) - H(AB), where H denotes the measurement entropy and AB is a non-signaling composite of A and B, then it can happen that I(A:BC) < I(A:B). This is relevant to IC in the sense of Pawlowski et al: we show that any monoentropic non-signaling theory in which measurement entropy is strongly subadditive, and also satisfies a version of the Holevo bound, is informationally causal, and on the other hand we observe that Popescu-Rohrlich boxes, which violate IC, also violate strong subadditivity. We also explore the interplay between measurement and mixing entropy and various natural conditions on theories that arise in quantum axiomatics.
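The classical case of the quantities above can be checked in a few lines: for Shannon entropy, subadditivity H(AB) ≤ H(A) + H(B) holds, so the mutual information I(A:B) = H(A) + H(B) - H(AB) is non-negative. A minimal sketch, using an arbitrary illustrative joint distribution:

```python
import math

# A toy classical check of the identities quoted above: the Shannon
# entropy is subadditive, H(AB) <= H(A) + H(B), so the mutual
# information I(A:B) = H(A) + H(B) - H(AB) is non-negative.
# The joint distribution below is an arbitrary illustrative choice.

def H(probs):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution p(a, b) over two binary variables.
p_ab = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_a = [sum(v for (a, _), v in p_ab.items() if a == x) for x in (0, 1)]
p_b = [sum(v for (_, b), v in p_ab.items() if b == y) for y in (0, 1)]

H_a, H_b, H_ab = H(p_a), H(p_b), H(p_ab.values())
I_ab = H_a + H_b - H_ab
print(f"H(A)={H_a:.3f} H(B)={H_b:.3f} H(AB)={H_ab:.3f} I(A:B)={I_ab:.3f}")
```

The point of the abstract is precisely that in more general probabilistic theories the stronger property (strong subadditivity), which classical entropy also enjoys, can fail.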

A generalized form of Wheeler-Feynman absorber theory is used to explain the quantum-mechanical paradox proposed by Einstein, Podolsky, and Rosen (EPR). The advanced solutions of the electromagnetic wave equation and of relativistic quantum-mechanical wave equations are shown to play the role of ''verifier'' in quantum-mechanical ''transactions,'' providing microscopic communication paths between detectors across spacelike intervals in violation of the EPR locality postulate. The principle of causality is discussed in the context of this approach, and possibilities for experimental tests of the theory are examined

A set of Euler's equations is obtained in the framework of the general relativity theory from the variational equation in the supposition that lagrangian of the material depends on additional (in comparison with classical theories) thermodynamic parameters and taking into account possible irreversible processes. Momentum equations for continuous medium of a thermodynamic closed set are shown to be the consequence of field equations. The problem about the type of energy-momentum material tensor in the presence of derivatives from additional thermodynamic parameters in the number of lagrangian arguments is considered

Some cosmological implications of a general scalar-tensor theory for induced gravity are discussed. The model exhibits a slow-rolling phase provided that the coupling function ε(φ) varies slowly enough that φ d ln ε(φ)/dφ ≪ 2 during almost all of the inflationary epoch. It is then shown that, as in ordinary induced-gravity inflation, the chaotic scenario is more natural than the new scenario, which proves not even to be self-consistent. The results are applied, for illustration, to a scalar-tensor theory of the Barker type.

The indefinite-metric quantum field theory of general relativity is extended to the coupled system of the gravitational field and a Dirac field on the basis of the vierbein formalism. The six extra degrees of freedom involved in the vierbein are made unobservable by introducing an extra subsidiary condition Q_s |phys> = 0, where Q_s denotes a new BRS charge corresponding to local Lorentz invariance. It is shown that a manifestly covariant, unitary, canonical theory can be constructed consistently on the basis of the vierbein formalism.

In the last few decades, there has been increasing interest in the role of the amygdala in psychiatric disorders and in particular its contribution to the socio-emotional impairments in autism spectrum disorders (ASDs). Given that the amygdala is a component structure of the social brain, several theoretical explanations compatible with amygdala dysfunction have been proposed to account for socio-emotional impairments in ASDs, including abnormal eye contact, gaze monitoring, face processing, mental state understanding and empathy. Nevertheless, many theoretical accounts, based on the Amygdala Theory of Autism, fail to elucidate the complex pattern of impairments observed in this population, which extends beyond the social domain. As posited by the Relevance Detector theory (Sander, Grafman and Zalla, 2003), the human amygdala is a critical component of a brain circuit involved in the appraisal of self-relevant events that include, but are not restricted to, social stimuli. Here, we propose that the behavioral and social-emotional features of ASDs may be better understood in terms of a disruption in a ‘Relevance Detector Network’ affecting the processing of stimuli that are relevant for the organism’s self-regulating functions. In the present review, we will first summarize the main literature supporting the involvement of the amygdala in socio-emotional disturbances in ASDs. Next, we will present a revised version of the amygdala Relevance Detector hypothesis and we will show that this theoretical framework can provide a better understanding of the heterogeneity of the impairments and symptomatology of ASDs. Finally, we will discuss some predictions of our model, and suggest new directions in the investigation of the role of the amygdala within the more generally disrupted cortical connectivity framework as a model of neural organization of the autistic brain.

Progressive kidney disease follows nephron loss, hyperfiltration, and incomplete repair, a process described as “maladaptive.” In the past 20 years, a new discipline has emerged that expands research horizons: evolutionary medicine. In contrast to physiologic (homeostatic) adaptation, evolutionary adaptation is the result of reproductive success that reflects natural selection. Evolutionary explanations for physiologically maladaptive responses can emerge from mismatch of the phenotype with environment or from evolutionary tradeoffs. Evolutionary adaptation to a terrestrial environment resulted in a vulnerable energy-consuming renal tubule and a hypoxic, hyperosmolar microenvironment. Natural selection favors a successful energy investment strategy: energy is allocated to maintenance of nephron integrity through reproductive years, but this declines with increasing senescence after ∼40 years of age. Risk factors for chronic kidney disease include restricted fetal growth or preterm birth (a life-history tradeoff resulting in fewer nephrons), evolutionary selection for APOL1 mutations (which provide resistance to trypanosome infection; a tradeoff), and modern life experience (a Western-diet mismatch leading to diabetes and hypertension). Current advances in genomics, epigenetics, and developmental biology have revealed proximate causes of kidney disease, but attempts to slow kidney disease remain elusive. Evolutionary medicine provides a complementary approach by addressing ultimate causes of kidney disease. Marked variation in nephron number at birth, nephron heterogeneity, and changing susceptibility to kidney injury throughout the life history are the result of evolutionary processes. Combined application of molecular genetics, evolutionary developmental biology (evo-devo), developmental programming, and life history theory may yield new strategies for prevention and treatment of chronic kidney disease.

A canonical quantization method is given for systems with first- and second-class constraints of arbitrary rank. The effectiveness of the method is demonstrated using sample Yang-Mills and gravitational fields. A correct expression is derived for the S-matrix of theories that are momentum-quadratic within the scope of canonical gauges, including ghost fields. Generalized quantization is performed and the S-matrix is derived in configuration space for theories of relativistic membranes, representing a generalization of theories of strings to the case of an extended spatial implementation. It is demonstrated that the theory of membranes in (n+1)-dimensional space is a system with rank-n constraints.

Utility and risk are two often competing measures of investment success. We show that the efficient trade-off between these two measures for investment portfolios occurs, in general, on a convex curve in the two-dimensional space of utility and risk. This is a rather general pattern. The modern portfolio theory of Markowitz [H. Markowitz, Portfolio Selection, 1959] and its natural generalization, the capital market pricing model [W. F. Sharpe, Mutual fund performance, 1966], are special cases of this general framework.
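The convex trade-off curve described above can be sketched in the simplest two-asset Markowitz setting, where portfolio variance is a quadratic (hence convex) function of the allocation weight. The returns, volatilities, and correlation below are hypothetical illustrative numbers, not data from the paper:

```python
import math

# A minimal two-asset sketch of the mean-variance trade-off curve from
# Markowitz portfolio theory.  All numbers are made-up illustrations.

mu1, mu2 = 0.08, 0.12        # expected returns of the two assets
s1, s2 = 0.10, 0.20          # standard deviations
rho = 0.25                   # correlation between the two assets

def portfolio(w):
    """Expected return and risk (std dev) for weight w in asset 1."""
    mean = w * mu1 + (1 - w) * mu2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return mean, math.sqrt(var)

curve = [portfolio(w / 20) for w in range(21)]

# Portfolio variance is quadratic in w with positive leading coefficient
# (s1^2 + s2^2 - 2*rho*s1*s2 > 0 for rho < 1), hence convex: every
# second difference of the variance along the weight grid is positive.
variances = [r * r for _, r in curve]
second_diffs = [variances[i - 1] - 2 * variances[i] + variances[i + 1]
                for i in range(1, len(variances) - 1)]
print(min(second_diffs))
```

The positive second differences are the discrete counterpart of the convexity claim: the efficient frontier bends the same way everywhere.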

On one standard philosophical position adopted by evolutionary naturalists, human ethical systems are nothing more than evolutionary adaptations that facilitate social behavior. Belief in an absolute moral foundation is therefore in error. But evolutionary naturalism, by its commitment to the basic valuational concept of fitness, reveals another, logical error: standard conceptions of value in terms of simple predication and properties are mistaken. Valuation has, instead, a relational structure that makes reference to respects, subjects and environments. This relational nature is illustrated by the analogy commonly drawn between value and color. Color perception, as recognized by the ecological concept, is relational and dependent on subject and environment. In a similar way, value is relational and dependent on subject and environment. This makes value subjective, but also objective in that it is grounded on facts about mattering. At bottom, values are complex relational facts. The view presented here, unlike other prominent relational and naturalistic conceptions of value, recognizes the full range of valuation in nature. The advantages of this relational conception are, first, that it gets valuation right; second, that it provides a framework to better explain and understand valuation in all its varieties and patterns.

This study is conducted to examine the validity of the generalized second law of thermodynamics (GSLT) in flat FRW for modified teleparallel gravity involving a coupling between a scalar field, the torsion scalar T, and the boundary term B = 2∇_μ T^μ. This theory is very useful, since it can reproduce other important well-known scalar field theories in suitable limits. The validity of the first and second laws of thermodynamics at the apparent horizon is discussed for any coupling. As examples, we have also explored the validity of those thermodynamic laws in some new cosmological solutions under the theory. Additionally, we have also considered the logarithmic entropy-corrected relation and discuss the GSLT at the apparent horizon.

Recent efforts to uncover the neural underpinnings of emotional experiences have provided a foundation for novel neurophysiological theories of emotions, adding to the existing body of psychophysiological, motivational, and evolutionary theories. Besides explicitly modeling human-specific emotions and considering the interactions between emotions and language, Koelsch et al.'s original contribution to this challenging endeavor is to identify four brain areas as distinct "affect systems" which differ in terms of emotional qualia and evolutionary pathways [1]. Here, we comment on some features of this promising Quartet Theory of Emotions, focusing particularly on evolutionary and biological aspects related to the four affect systems and their relation to prevailing emotion theories, as well as on the role of music-induced emotions.

Evolutionary theory predicts that differential reproductive effort and rate of reproductive senescence will evolve under different rates of external mortality. We examine the evolutionary divergence of age-specific reproduction in two life-history ecotypes of the western terrestrial garter snake, Thamnophis elegans. We test for the signature of reproductive senescence (decreasing fecundity with age) and increasing reproductive effort with age (increasing reproductive productivity per gram female) in replicate populations of two life-history ecotypes: snakes that grow fast, mature young and have shorter lifespans, and snakes that grow slow, mature late and have long lives. The difference between life-history ecotypes is due to genetic divergence in growth rate. We find (i) reproductive success (live litter mass) increases with age in both ecotypes, but does so more rapidly in the fast-growth ecotype, (ii) reproductive failure increases with age in both ecotypes, but the proportion of reproductive failure to total reproductive output remains invariant, and (iii) reproductive effort remains constant in fast-growth individuals with age, but declines in slow-growth individuals. This illustration of increasing fecundity with age, even at the latest ages, deviates from standard expectations for reproductive senescence, as does the lack of increases in reproductive effort. We discuss our findings in light of recent theories regarding the phenomenon of increased reproduction throughout life in organisms with indeterminate growth and its potential to offset theoretical expectations for the ubiquity of senescence.

Objective: In this article, I want to promote theoretical awareness and commitment among qualitative researchers in general practice and suggest adequate and feasible theoretical approaches. Approach: I discuss different theoretical aspects of qualitative research and present the basic foundations...... theory is a consistent and soundly based set of assumptions about a specific aspect of the world, predicting or explaining a phenomenon. Qualitative research is situated in an interpretative paradigm where notions about particular human experiences in context are recognized from different subject...... in qualitative analysis are presented, emphasizing substantive theories to sharpen the interpretative focus. Such approaches are clearly within reach for a general practice researcher contributing to clinical practice by doing more than summarizing what the participants talked about, without trying to become...

Thermodynamics, which describes vast systems, has been reconciled with small scales, relevant to single-molecule experiments, in resource theories. Resource theories have been used to model exchanges of energy and information. Recently, particle exchanges were modeled; and an umbrella family of thermodynamic resource theories was proposed to model diverse baths, interactions, and free energies. This paper motivates and details the family’s structure and prospective applications. How to model electrochemical, gravitational, magnetic, and other thermodynamic systems is explained. Szilárd’s engine and Landauer’s Principle are generalized, as resourcefulness is shown to be convertible not only between information and gravitational energy, but also among diverse degrees of freedom. Extensive variables are associated with quantum operators that might fail to commute, introducing extra nonclassicality into thermodynamic resource theories. An early version of this paper partially motivated the later development of noncommutative thermalization. This generalization expands the theories’ potential for modeling realistic systems with which small-scale statistical mechanics might be tested experimentally.
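The generalized Szilárd engine and Landauer's Principle mentioned above build on the standard Landauer bound, which is easy to evaluate numerically. The sketch below uses only the textbook formula k_B·T·ln 2 and standard constants, not any result specific to this paper:

```python
import math

# The standard Landauer bound referenced above: erasing one bit of
# information at temperature T dissipates at least k_B * T * ln(2)
# of energy.  Constants are textbook/SI values, not paper results.

K_B = 1.380649e-23           # Boltzmann constant, J/K (exact, SI 2019)

def landauer_bound(temperature_kelvin):
    """Minimum erasure cost of one bit, in joules."""
    return K_B * temperature_kelvin * math.log(2)

e_room = landauer_bound(300.0)
print(f"Landauer bound at 300 K: {e_room:.3e} J")
```

At room temperature this is a few zeptojoules per bit, which is the scale at which the resource-theoretic trade-offs between information and other thermodynamic resources become relevant.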

Evolution as an idea has a lengthy history, even though the idea of evolution is generally associated with Darwin today. Rebecca Stott provides an engaging and thoughtful overview of this history of evolutionary thinking in her 2013 book, Darwin's Ghosts: The Secret History of Evolution. Since Darwin, the debate over evolution—both how it takes place and, in a long war of words with religiously-oriented thinkers, whether it takes place—has been sustained and heated. A growing share of this debate is now devoted to examining how evolutionary thinking affects areas outside of biology. How do our lives change when we recognize that all is in flux? What can we learn about life more generally if we study change instead of stasis? Carter Phipps’ book, Evolutionaries: Unlocking the Spiritual and Cultural Potential of Science's Greatest Idea, delves deep into this relatively new development. Phipps generally takes as a given the validity of the Modern Synthesis of evolutionary biology. His story takes us into, as the subtitle suggests, the spiritual and cultural implications of evolutionary thinking. Can religion and evolution be reconciled? Can evolutionary thinking lead to a new type of spirituality? Is our culture already being changed in ways that we don't realize by evolutionary thinking? These are all important questions and Phipps' book is a great introduction to this discussion. Phipps is an author, journalist, and contributor to the emerging “integral” or “evolutionary” cultural movement that combines the insights of Integral Philosophy, evolutionary science, developmental psychology, and the social sciences. He has served as the Executive Editor of EnlightenNext magazine (no longer published) and more recently is the co-founder of the Institute for Cultural Evolution, a public policy think tank addressing the cultural roots of America's political challenges. What follows is an email interview with Phipps.

The proper framework for testing Rastall's theory and its generalizations is in the case of non-negligible (i.e. discernible) gravitational effects such as gravity gradients. These theories have conserved integral four-momentum and angular momentum. The Nordtvedt effect then provides limits on the parameters which arise as the result of the non-zero divergence of the energy-momentum tensor.

A general methodology for optimization of assembly shuffling and burnable poison (BP) loadings for LWR reload design has been developed. The uniqueness of this approach lies in the coupling of Generalized Perturbation Theory (GPT) methods and standard Integer Programming (IP) techniques. An IP algorithm can simulate the discrete nature of the fuel shuffling and BP loading problems, and the use of GPT sensitivity data provides an efficient means for modeling the behavior of the important core performance parameters. The method is extremely flexible since the choice of objective function and the number and mix of constraints depend only on the ability of GPT to determine the appropriate sensitivity functions
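The coupling of GPT sensitivity data with integer programming can be caricatured in a few lines: a linear sensitivity model scores each discrete loading decision, and a small integer program (here solved by exhaustive search rather than a real IP solver) picks the best combination under a count constraint. The positions, sensitivity values, and constraint below are all hypothetical illustrations, not data from the methodology described:

```python
from itertools import combinations

# Toy sketch of coupling GPT-style sensitivities with an integer-
# programming style search: each candidate burnable-poison (BP)
# position gets a linear sensitivity coefficient (predicted change in
# a core performance parameter per insertion), and we pick the best
# feasible combination.  All numbers are hypothetical.

# Imagined predicted change in a peaking-type parameter per BP
# insertion at each of six candidate positions.
sensitivity = [-0.030, -0.012, -0.025, -0.008, -0.018, -0.021]
N_BP = 3  # constraint: exactly three BP clusters available

best = min(combinations(range(len(sensitivity)), N_BP),
           key=lambda combo: sum(sensitivity[i] for i in combo))
best_delta = sum(sensitivity[i] for i in best)
print(best, best_delta)
```

For six positions exhaustive search is trivial; the point of using a real IP algorithm, as in the abstract, is that realistic shuffling problems have combinatorially many discrete configurations while the GPT sensitivities keep each candidate cheap to evaluate.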

In section 2 we describe more carefully the generalized Poisson processes, giving a realization of the underlying probability space, and we characterize these processes by their characteristic functionals. Section 3 is devoted to the proof of the previous formula for quantum mechanical systems, with possibly velocity-dependent potentials, and in section 4 we give an application of the previous theory to some relativistic Bose field models.

We derive general and explicit expressions for the unrenormalized and renormalized dressed propagators of fermions in parity-nonconserving theories with inter-generation mixing. The mass eigenvalues, the corresponding mass counterterms, and the effect of inter-generation mixing on their determination are discussed. Invoking the Aoki-Hioki-Kawabe-Konuma-Muta renormalization conditions and employing a number of very useful relations from Matrix Algebra, we show explicitly that the renormalized dressed propagators satisfy important physical properties.

The general magnetostatic equilibrium problem for the geomagnetic tail is reduced to the solution of ordinary differential equations and ordinary integrals. The theory allows the integration of the self-consistent magnetotail equilibrium field from the knowledge of four functions of two space variables: the neutral sheet location, the total pressure, the magnetic field strength, and the z component of the magnetic field at the neutral sheet.

We study scattering amplitudes in two-dimensional string theory on a black hole background. We start with a simple derivation of the Fateev-Zamolodchikov-Zamolodchikov (FZZ) duality, which associates correlation functions of the sine-Liouville integrable model on the Riemann sphere to tree-level string amplitudes on the Euclidean two-dimensional black hole. This derivation of FZZ duality is based on perturbation theory, and it relies on a trick originally due to Fateev, which involves duality relations between different Selberg type integrals. This enables us to rewrite the correlation functions of sine-Liouville theory in terms of a special set of correlators in the gauged Wess-Zumino-Witten (WZW) theory, and use this to perform further consistency checks of the recently conjectured Generalized FZZ (GFZZ) duality. In particular, we prove that n-point correlation functions in sine-Liouville theory involving n−2 winding modes actually coincide with the correlation functions in the SL(2,ℝ)/U(1) gauged WZW model that include n−2 oscillator operators of the type described by Giveon, Itzhaki and Kutasov in reference https://www.doi.org/10.1007/JHEP10(2016)157. This proves the GFZZ duality for the case of tree level maximally winding violating n-point amplitudes with arbitrary n. We also comment on the connection between GFZZ and other marginal deformations previously considered in the literature.

A discussion of the functional setting customarily adopted in General Relativity (GR) is proposed. This is based on the introduction of the notion of nonlocal point transformations (NLPTs). While allowing the extension of the traditional concept of GR-reference frame, NLPTs are important because they permit the explicit determination of the map between intrinsically different and generally curved space-times expressed in arbitrary coordinate systems. For this purpose, in the paper the mathematical foundations of NLPT-theory are laid down and basic physical implications are considered. In particular, explicit applications of the theory are proposed, which concern (1) a solution to the so-called Einstein teleparallel problem in the framework of NLPT-theory; (2) the determination of the tensor transformation laws holding for the acceleration 4-tensor with respect to the group of NLPTs and the identification of NLPT-acceleration effects, namely, the relationship established via general NLPT between particle 4-acceleration tensors existing in different curved space-times; (3) the construction of the nonlocal transformation law connecting different diagonal metric tensors solution to the Einstein field equations; and (4) the diagonalization of nondiagonal metric tensors.

A growing number of studies indicate the ubiquity of school bullying: It is a global concern, regardless of cultural differences. Little previous research has examined whether leading criminological theories can explain bullying, despite the commonality between bullying and delinquency. The current investigation uses longitudinal data on 655…

It is shown that while the predictions of the relativistic theory of gravitation (RTG) for gravitational effects are unique and consistent with the available experimental data, the corresponding predictions of general relativity theory (GRT) are not unique. This nonuniqueness manifests itself in some effects at first order in the gravitational interaction constant and in others at second order. The absence in GRT of energy-momentum and angular-momentum conservation laws for matter and the gravitational field taken together, and its inability to give uniquely determined predictions for gravitational phenomena, compel the rejection of GRT as a physical theory.

In this paper we present a chaos-based evolutionary algorithm (EA) for solving nonlinear programming problems, named the chaotic genetic algorithm (CGA). CGA integrates a genetic algorithm (GA) with a chaotic local search (CLS) strategy to accelerate the optimum-seeking operation and to speed convergence to the global solution. Combining the global search of the GA with the CLS procedure should offer the advantages of both optimization methods while offsetting their disadvantages. In this way, it is intended to enhance global convergence and to prevent the search from becoming stuck at a local solution. The inherent characteristics of chaos can strengthen optimization algorithms by enabling them to escape local solutions and improve convergence toward the global solution. Twelve chaotic maps have been analyzed in the proposed approach. Simulation results on the CEC'2005 benchmark set show that chaotic mapping can be an effective strategy for improving the performance of EAs.
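The GA-plus-CLS hybrid described in this abstract can be sketched as follows. The objective function, the logistic map as chaos source, the averaging crossover, and all parameter choices below are illustrative assumptions, not the paper's actual CGA:

```python
import random

def sphere(x):
    """Illustrative objective: the sphere benchmark, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def logistic_map(z, mu=4.0):
    """Logistic map, a common source of chaotic sequences in CLS."""
    return mu * z * (1.0 - z)

def chaotic_local_search(best, radius, steps=20):
    """Perturb the incumbent along a chaotic trajectory, keeping improvements."""
    z = 0.7  # any seed in (0,1) avoiding the map's fixed points
    x, fx = list(best), sphere(best)
    for _ in range(steps):
        z = logistic_map(z)
        cand = [xi + radius * (2.0 * z - 1.0) for xi in x]
        fc = sphere(cand)
        if fc < fx:
            x, fx = cand, fc
    return x

def cga(dim=5, pop_size=30, gens=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=sphere)
        # simple GA step: keep the elite half, refill by crossover + mutation
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            children.append([(ai + bi) / 2.0 + random.gauss(0.0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = elite + children
        # chaotic local search around the current best, with shrinking radius
        pop[0] = chaotic_local_search(pop[0], radius=(hi - lo) * 0.1 * (1 - g / gens))
    return min(pop, key=sphere)

best = cga()
print(sphere(best))  # small value near 0
```

The CLS step is the distinctive ingredient: instead of random perturbation, the incumbent is moved along a deterministic chaotic trajectory, which densely samples the neighbourhood and helps escape shallow local minima.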

I believe that people will not feel comfortable and positive about the contemporary world until we can endorse and believe an evolutionary cosmology which is appropriate to modern conditions. A cosmology is a mythical account of the universe as it presents itself to the human mind; it needs to be poetic, symbolic, inspiring of a sense of awe and mystery. Furthermore, a complete cosmology should include the three levels of macro-, meso- and micro-cosm, in order to understand the nature of the universe, human society, and the individual's relation to them. Traditional cosmologies described an eternal underlying structure to ultimate reality--a static ideal state towards which the world ought to gravitate. However, modern life is characterized by rapid growth, novelty, destruction and fluidity of all kinds of structures, a feature which traditional static cosmologies interpret negatively and pessimistically. A modern cosmology therefore needs to be focused on underlying dynamic process instead of structure and stasis. Biologists are better placed than many to appreciate a cosmology based on evolutionary change, because this is the mainstream understanding of adaptation and diversity in the natural world. The same dynamic, neophiliac and open-ended process of 'creative destruction' can be seen at work in science, economics, and modern spirituality. But a modern cosmology will only be experienced as both deep and spontaneous when it takes the form of a mythic account that is first encountered and assimilated during childhood. Since myths arise as a consequence of human creativity, there is a vital future mythogenic role for artists in the realm of ideas, images and stories: people such as mystics, poets and philosophers--including, I hope and expect, creatively inspired scientists.

One ongoing challenge for socio-hydrology is the problem of generalization: to what extent do common human-water co-evolutions exist across distinct cases, and what are the underlying mechanisms of these co-evolutions? This problem stems in part from a lack of unifying theories in socio-hydrology, which hinders the explanation and generalization of results between cases in different regions. Theories help an analyst to make the assumptions necessary to diagnose a specific phenomenon, to explain the general mechanisms of causation, and, thus, to predict future outcomes. To help address the issue, this study introduces two theories that are increasingly used in the fields of sustainability science and social-ecological systems research: robustness-fragility tradeoff (RFTO) and cultural multi-level selection (CMLS). We apply each of these theories to two distinct cases (water management issues in southwest Bangladesh and the Kissimmee River Basin, Florida) and interpret the phenomena of the levee and adaptation effects. CMLS and RFTO focus on complementary aspects of socio-hydrological phenomena. The theory of RFTO, which is mostly about inherent tradeoffs associated with infrastructure improvements, explains how efforts to increase system robustness can generate hidden endogenous risks. CMLS theory, rooted in the broader theory of cultural evolution, concerns how human cultural dynamics can act as an endogenous driver of system change across multiple levels of social organization. Using the applied examples, we demonstrate that these two theories can provide an effective way to study socio-hydrological systems and to overcome the generalization problem. Our work shows that multiple theories can be synthesized to give a richer understanding of diverse socio-hydrological patterns.

This document, one of a series of reports examining the possible contribution of other disciplines to evaluation methodology, describes the major elements of general systems theory (GST), cybernetics theory (CT) and management control theory (MCT). The author suggests that MCT encapsulates major concerns of evaluation since it reveals that…

In recent years, supply chain management has become known as a key factor for achieving competitive advantage. Better customer service, revenue improvement and cost reduction are the results of this philosophy. Organizations can manage the performance of their firms through appropriate goal setting, identification of criteria and continuous performance measurement, which creates a clear view of the business circumstances. Developing and defining appropriate indicators at different levels of the chain is necessary for implementing a performance measurement system. In this study, we propose a new method to determine the measurement indicators and strategies of a company in terms of the balanced scorecard. The study combines the balanced scorecard, path analysis, evolutionary game theory and cooperative game theory for strategic planning. It offers an appropriate program for the future activities of organizations and determines the present status of the firm. The implementation of the proposed method is illustrated for a food producer and the results are analyzed.

The Neutral Theory of Evolution (NTE) proposes mutation and random genetic drift as the most important evolutionary factors. The most conspicuous feature of evolution is genomic stability during paleontological eras and lack of variation among taxa; 98% or more of nucleotide sites are monomorphic within a species. NTE explains this homology by random fixation of neutral bases and negative selection (purifying selection) that contributes neither to evolution nor to polymorphisms. Purifying selection is insufficient to account for this evolutionary feature, and the Nearly-Neutral Theory of Evolution (N-NTE) included negative selection with coefficients as low as the mutation rate. These NTE and N-NTE propositions are thermodynamically (tendency to random distributions, second law), biotically (recurrent mutation), logically and mathematically (resilient equilibria instead of fixation by drift) untenable. Recurrent forward and backward mutation and random fluctuations of base frequencies alone at a site make life's organization and fixations impossible. Drift is not a directional evolutionary factor, but a directional tendency of matter-energy processes (second law) which threatens biotic organization. Drift cannot drive evolution. At a site, the mutation rates among bases and the selection coefficients determine a resilient equilibrium frequency of bases that genetic drift cannot change. The expected neutral random interaction among nucleotides is zero; however, huge interactions and periodicities were found between bases of dinucleotides separated by 1, 2... and more than 1,000 sites. Every base is co-adapted with the whole genome. Neutralists found that neutral evolution is independent of population size (N); thus neutral evolution should be independent of drift, because the effect of drift depends on N. Also, chromosome size and shape as well as protein size are far from random.

Generalized perturbation theory (GPT) in neutron transport is a means to evaluate eigenvalue and reaction rate variations due to small changes in the reactor properties (macroscopic cross sections). These variations can be decomposed into two terms: a direct term corresponding to the changes in the cross sections themselves and an indirect term that takes into account the perturbations in the neutron flux. As we will show, taking into account the indirect term using a GPT method is generally straightforward, since this term is the scalar product of the unperturbed generalized adjoint with the product of the variation of the transport operator and the unperturbed flux. In the case where the collision probability (CP) method is used to solve the transport equation, evaluating the perturbed transport operator involves calculating the variations in the CP matrix for each change in the reactor properties. Because most of the computational effort is dedicated to the CP matrix calculation, the gains expected from the GPT method would therefore be annihilated. Here we will present a technique to approximate the variations in the CP matrices, thereby replacing the variations in the transport operator with source term variations. We will show that this approximation yields errors fully compatible with the standard generalized perturbation theory errors. Results for 2D CANDU cell calculations will be presented. (author)
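The direct/indirect decomposition described above can be sketched in compact notation (a sketch only; sign and normalization conventions vary between GPT formulations). For a response R = ⟨Σ_d, φ⟩ built on the flux φ:

```latex
% First-order GPT decomposition of a response variation (conventions vary):
\delta R \;\simeq\;
\underbrace{\langle \delta\Sigma_d,\, \varphi_0 \rangle}_{\text{direct term}}
\;+\;
\underbrace{\langle \Gamma_0^{\dagger},\, \delta L\, \varphi_0 \rangle}_{\text{indirect term}},
\qquad
L_0^{\dagger}\,\Gamma_0^{\dagger} = \Sigma_d ,
```

where φ₀ is the unperturbed flux, Γ₀† the unperturbed generalized adjoint, and δL the variation of the transport operator. The technique proposed in the abstract amounts to approximating the contribution of the CP-matrix variations inside δL φ₀ by equivalent source-term variations.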

V792 Her is an eclipsing RS CVn binary with an orbital period of 27.54 days whose components have spectral types of K0 III and F2 IV. New spectroscopic observations combined with existing photometry have resulted in masses of 1.47 ± 0.003 and 1.41 ± 0.003 solar masses for the K giant and F star, respectively. Additional fundamental parameters are derived. Standard evolutionary models were specifically computed by VandenBerg (1990) for the two stars. The best fit occurs if the components are somewhat metal poor with [Fe/H] = −0.46. Ages of about 2.3 × 10^9 yr derived for the two components differ by less than 3 percent. Thus, standard evolutionary models with no convective overshoot are able to fit the observed parameters of stars as massive as 1.45 solar masses. However, a definitive comparison is not yet possible since the metal abundance of the stars is unknown and metal-poor convective-overshoot tracks in this mass range are needed. 35 refs

Chemotherapy for metastatic cancer commonly fails due to evolution of drug resistance in tumor cells. Here, we view cancer treatment as a game in which the oncologists choose a therapy and tumors ‘choose’ an adaptive strategy. We propose the oncologist can gain an upper hand in the game by choosing treatment strategies that anticipate the adaptations of the tumor. In particular, we examine the potential benefit of exploiting evolutionary tradeoffs in tumor adaptations to therapy. We analyze a mathematical model where cancer cells face tradeoffs in allocation of resistance to two drugs. The tumor ‘chooses’ its strategy by natural selection and the oncologist chooses her strategy by solving a control problem. We find that when tumor cells perform best by investing resources to maximize response to one drug, the optimal therapy is a time-invariant delivery of both drugs simultaneously. However, if cancer cells perform better using a generalist strategy allowing resistance to both drugs simultaneously, then the optimal protocol is a time-varying solution in which the two drug concentrations negatively covary. However, drug interactions can significantly alter these results. We conclude that knowledge of both evolutionary tradeoffs and drug interactions is crucial in planning optimal chemotherapy schedules for individual patients. (paper)
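The leader-follower structure of this game can be sketched with a toy static version: the tumor allocates a resistance fraction u between two drugs, and the oncologist splits a fixed dose budget anticipating the tumor's best response. The kill-rate model and every parameter below are illustrative assumptions, not the paper's actual dynamics:

```python
from math import exp

def kill_rate(u, d1, d2, k=3.0):
    """Kill rate on cells allocating fraction u of resistance to drug 1.
    exp(-k*u): resistance to drug 1 attenuates drug 1's effect, and
    symmetrically for drug 2 (an assumed functional form)."""
    return d1 * exp(-k * u) + d2 * exp(-k * (1.0 - u))

def tumor_best_response(d1, d2, grid=101):
    """The tumor 'chooses' u by natural selection: minimal kill rate."""
    return min((i / (grid - 1) for i in range(grid)),
               key=lambda u: kill_rate(u, d1, d2))

def oncologist_best(total_dose=1.0, grid=101):
    """Pick (d1, d2) with d1 + d2 = total_dose maximizing kill at the
    tumor's best response (a Stackelberg-style leader move)."""
    best = None
    for i in range(grid):
        d1 = total_dose * i / (grid - 1)
        d2 = total_dose - d1
        u = tumor_best_response(d1, d2)
        k = kill_rate(u, d1, d2)
        if best is None or k > best[0]:
            best = (k, d1, d2)
    return best

k, d1, d2 = oncologist_best()
print(d1, d2)  # in this symmetric toy model, an even split is optimal
```

With this symmetric specialist-penalizing tradeoff, the anticipating oncologist delivers both drugs simultaneously, echoing the time-invariant simultaneous-delivery result stated in the abstract; the time-varying regime requires the dynamic model the paper actually analyzes.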

In Chapter I, after an introduction to theories of gravity alternative to general relativity, metric theories, and the parameterized post-Newtonian (PPN) formalism, a new class of metric theories of gravity is defined. As a result, the post-Newtonian approximation of the new theories is not described by the PPN formalism. In fact, under the weak-field and slow-motion hypothesis, the post-Newtonian expression of the metric tensor contains an infinite set of new terms and correspondingly an infinite set of new PPN parameters. Chapters II, III, and IV are devoted to new experiments to test general relativity and other metric theories of gravity. In particular, in Chapter IV, it is shown that two general relativistic effects, the Lense-Thirring and De Sitter-Fokker precessions of the nodal lines of an artificial Earth satellite, are today detectable using high-altitude laser-ranged satellites such as Lageos. The orbit of this satellite is known with unprecedented accuracy. The author then describes a method of measuring these relativistic precessions using Lageos together with another similar high-altitude laser-ranged satellite with appropriately chosen orbital parameters.

Several theories propose alternative explanations for drug addiction. We propose a general theory of the transition to addiction that synthesizes knowledge generated in the field of addiction into a unitary explanatory frame. Transition to addiction results from a sequential three-step interaction between (1) individual vulnerability and (2) the degree/amount of drug exposure. The first step, sporadic recreational drug use, is a learning process mediated by overactivation of the neurobiological substrates of natural rewards that allows most individuals to perceive drugs as highly rewarding stimuli. The second step, intensified, sustained and escalated drug use, occurs in some vulnerable individuals who have a hyperactive dopaminergic system and impaired prefrontal cortex function. Sustained and prolonged drug use induces incentive sensitization and an allostatic state that makes drugs strongly wanted and needed. Habit formation can also contribute to stabilizing sustained drug use. The last step, loss of control over drug intake and full addiction, is due to a second vulnerable phenotype. This loss-of-control-prone phenotype is triggered by long-term drug exposure and is characterized by long-lasting loss of synaptic plasticity in reward areas of the brain that induces a form of behavioral crystallization resulting in loss of control over drug intake. Because of behavioral crystallization, drugs are now not only wanted and needed but also pathologically mourned when absent. This general theory demonstrates that drug addiction is a true psychiatric disease caused by a three-step interaction between vulnerable individuals and the amount/duration of drug exposure.

People can use a variety of different strategies to perform tasks and these strategies all have two characteristics in common. First, they can be evaluated in comparison with either an absolute or a relative standard. Second, they can be used at varying levels of consistency. In the present article, the authors develop a general theory of task…

As the need for intercultural communication in the field of law has increased, the foundation of a general theory of bilingual legal lexicography must be given priority. This paper introduces, describes and explains the elements necessary for compiling the optimal bilingual law dictionary. The theory deals with much more than the traditional question of equivalence, and shows which considerations are necessary to fully exploit the potential of printed dictionaries for the benefit of the users. Most users need linguistic and factual information that must be organised and presented in a structured way. This includes user research, organisation of dictionary chapters, and the presentation and structure of the linguistic and factual information in the articles and elsewhere in the dictionary.

The thermal conductivity of particulate nanocomposites is strongly dependent on the size, shape, orientation and dispersion uniformity of the inclusions. To correctly estimate the effective thermal conductivity of the nanocomposite, all these factors should be included in the prediction model. In this paper, the formulation of a generalized effective medium theory for the determination of the effective thermal conductivity of particulate nanocomposites with multiple inclusions is presented. The formulated methodology takes into account all the factors mentioned above and can be used to model nanocomposites with multiple inclusions that are randomly oriented or aligned in a particular direction. The effect of inclusion dispersion non-uniformity is modeled using a two-scale approach. The applications of the formulated effective medium theory are demonstrated using previously published experimental and numerical results for several particulate nanocomposites.
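For orientation, the simplest classical effective-medium estimate that such generalized theories reduce to is the Maxwell-Garnett formula for dilute, well-dispersed spherical inclusions; the sketch below shows that special case only, not the paper's generalized multi-inclusion formulation:

```python
def maxwell_garnett(k_m, k_p, f):
    """Effective thermal conductivity of a matrix (conductivity k_m)
    containing volume fraction f of spherical particles (conductivity k_p),
    in the classical Maxwell-Garnett form."""
    num = k_p + 2.0 * k_m + 2.0 * f * (k_p - k_m)
    den = k_p + 2.0 * k_m - f * (k_p - k_m)
    return k_m * num / den

# sanity checks: no filler -> matrix value; identical phases -> unchanged
print(maxwell_garnett(0.2, 200.0, 0.0))        # 0.2
print(maxwell_garnett(0.2, 0.2, 0.3))          # 0.2
print(maxwell_garnett(0.2, 200.0, 0.1) > 0.2)  # True: conductive filler raises k_eff
```

The generalized theory in the abstract extends this picture with shape (depolarization) factors, orientation averaging, interfacial resistance and a two-scale treatment of dispersion non-uniformity.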

The present booklet is intended to convey as exact an insight as possible into relativity theory to those readers who are interested in the theory from a general scientific, philosophical point of view, without mastering the mathematical apparatus. The lecture presupposes a level of education corresponding roughly to a school-leaving examination and--despite the shortness of the booklet--quite a lot of perseverance and strength of will. The author has taken great pains to present the main ideas as clearly and simply as possible, on the whole in the sequence and connection in which they actually arose. In the interest of clarity it seemed unavoidable to repeat myself frequently, without paying the slightest regard to the elegance of the presentation; I conscientiously followed the precept of the brilliant theoretician L. Boltzmann, that elegance should be the concern of tailors and shoemakers.

A general theory for free-floating ball lightning is presented which unifies the phantom plasma ball theory, involving the production of very little light, with theories for ball lightning in which the light output is produced by burning particles from the soil. The mechanism for the formation of plasma balls is shown to be quite general, producing very similar plasma balls independent of initial ion densities over four orders of magnitude. All that is required is an excess of positive ions in the initial ball of ions. The central plasma density after 1 s is shown to be the reciprocal of the ion neutralization coefficient in all cases, both analytically and computationally. Further, the plasma region has zero electric field in all cases. Surrounding the plasma ball is a sphere of positive ions moving away from the centre via their own space-charge field; this space-charge field, which is the same in all cases near the plasma ball, drives negative ions and negative particles towards the plasma centre. The connection with burning-particle theories is the proposition that the burning particles are highly charged, which is very likely after a lightning strike. Burning negatively charged particles would be driven into the plasma ball region and trapped, while any positively charged particles would be driven away. The plasma ball structure is shown to last more than 10 s, and the ‘burnout time’ for a typical coal particle (as an example) has been measured at 5-10 s; this is comparable with the lifetimes observed for ball lightning. The light output from a few hundred particles is estimated to be ~1 W, a typical output for ball lightning. Finally, suggestions are made for the generation of ball lightning in the laboratory.

Despite the well-established finding that American Indian adolescents are at a greater risk of illicit substance use and abuse than the general population, few generalist explanations of deviance have been extended to American Indian substance use. Using a popular generalist explanation of deviance, General Strain Theory, we explore the predictive utility of this model with a subsample of American Indian adolescents from waves one and two of the National Longitudinal Study of Adolescent Health (Add-Health). Overall, we find mixed support for the utility of General Strain Theory to account for American Indian adolescent substance use. While exposure to recent life events, a common measure of stress exposure, was found to be a robust indicator of substance use, we found mixed support for the thesis that negative affect plays a key role in mediating the link between strain and substance use. However, we did find evidence that personal and social resources serve to condition the link between stress exposure and substance use, with parental control, self-restraint, religiosity, and exposure to substance using peers each serving to moderate the association between strain and substance use, albeit in more complex ways than expected.

Generalized probabilistic theories (GPT) provide a general framework that includes classical and quantum theories. It is described by a cone C and its dual C*. We show that whether some one-way communication complexity problems can be solved within a GPT is equivalent to the recently introduced cone factorization of the corresponding communication matrix M. We also prove an analogue of Holevo's theorem: when the cone C is contained in ℝ^n, the classical capacity of the channel realized by sending GPT states and measuring them is bounded by log n. Polytopes and optimising functions over polytopes arise in many areas of discrete mathematics. A conic extension of a polytope is the intersection of a cone C with an affine subspace whose projection onto the original space yields the desired polytope. Extensions of polytopes can sometimes be much simpler geometric objects than the polytope itself. The existence of a conic extension of a polytope is equivalent to that of a cone factorization of the slack matrix of the polytope, on the same cone. We show that all 0/1 polytopes whose vertices can be recognized by a polynomial size circuit, which includes as a special case the travelling salesman polytope and many other polytopes from combinatorial optimization, have small conic extension complexity when the cone is the completely positive cone. Using recent exponential lower bounds on the linear extension complexity of polytopes, this provides an exponential gap between the communication complexity of GPT based on the completely positive cone and classical communication complexity, and a conjectured exponential gap with quantum communication complexity. Our work thus relates the communication complexity of generalizations of quantum theory to questions of mainstream interest in the area of combinatorial optimization.
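The slack-matrix notion used above can be made concrete on a tiny example. The unit square and the trivial factorization below are illustrative choices, meant only to show what a nonnegative cone factorization S = UV certifies; the interesting cases in the abstract involve far smaller inner dimensions and richer cones:

```python
# Slack matrix of the unit square [0,1]^2.
# Facets as (a, b) meaning a.x >= b: x>=0, -x>=-1, y>=0, -y>=-1.
facets = [((1, 0), 0.0), ((-1, 0), -1.0), ((0, 1), 0.0), ((0, -1), -1.0)]
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Entry S[i][j] = slack of vertex j in facet inequality i (always >= 0).
S = [[a[0] * v[0] + a[1] * v[1] - b for v in vertices] for (a, b) in facets]

# Trivial nonnegative factorization S = I * S: every polytope is an
# extension of itself over the nonnegative cone R^m_+.  A factorization
# with *smaller* inner dimension would certify a more compact extension,
# which is the quantity (extension complexity) the abstract studies.
U = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
V = S
UV = [[sum(U[i][k] * V[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
assert UV == S and all(s >= 0 for row in S for s in row)
print(S)
```

Replacing the nonnegative cone by the completely positive cone in such factorizations is exactly the step that yields the exponentially smaller extensions discussed in the abstract.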

The nonextensive kinetic theory for degenerate quantum gases is discussed in the general relativistic framework. By incorporating nonadditive modifications in the collisional term of the relativistic Boltzmann equation and in the entropy current, it is shown that the Tsallis entropic framework satisfies an H-theorem in the presence of gravitational fields. Consistency with the second law of thermodynamics is obtained only when the entropic q-parameter lies in the interval q ∈ [0, 2]. As in the absence of gravitational fields, it is also proved that the local collisional equilibrium is described by the extended Bose-Einstein (Fermi-Dirac) q-distributions.

The cosmological models resulting from a general scalar-tensor theory of gravity are discussed. Those models for which the scalar field varies as a power of the cosmological expansion factor (i.e., φ ∝ R^n) are considered in detail, leading to a set of such models compatible with observation. This set includes models in which the scalar coupling parameter ω is negative. The models described here are similar to those of Newtonian cosmology obtained from an impotence principle. (author)

This thesis provides a general method to compute all first-order corrections to the renormalization group equations. This requires the computation of the first perturbative corrections to the renormalization group β-functions. These corrections are described by Feynman diagrams with two loops. The two-loop renormalization is treated for an arbitrary renormalizable field theory. Two cases are considered: 1. the Yukawa sector; 2. the gauge coupling and the scalar potential. In a final section, the breakdown of unitarity in the dimensional reduction scheme is discussed. (Auth.)

A general theory for spatio-temporal intensity correlation measurements on a scattered beam is developed. A completely quantum mechanical description of both the excitation and the detection setup is used. This description is essentially valid for weak incident light beams and single-photon absorption processes. Stationary as well as time-resolved experiments are described from a unified point of view. The interest of such experiments for the study of processes like resonance Raman scattering and resonance fluorescence is emphasized. An observable coherent contribution associated with different final levels of the target atoms or molecules is also obtained, a result which cannot be reached by intensity measurements.

Institutions are hard to define and hard to study. Long prominent in political science have been two theories: Rational Choice Institutionalism (RCI) and Historical Institutionalism (HI). Arising from the life sciences is now a third: Evolutionary Institutionalism (EI). Comparative strengths and weaknesses of these three theories warrant review, and the value to be added by expanding the third beyond Darwinian evolutionary theory deserves consideration. Should evolutionary institutionalism expand to accommodate new understanding in ecology, such as might apply to the emergence of stability, and in genetics, such as might apply to political behavior? Core arguments are reviewed for each theory with more detailed exposition of the third, EI. Particular attention is paid to EI's gene-institution analogy; to variation, selection, and retention of institutional traits; to endogeneity and exogeneity; to agency and structure; and to ecosystem effects, institutional stability, and empirical limitations in behavioral genetics. RCI, HI, and EI are distinct but complementary. Institutional change, while amenable to rational-choice analysis and, retrospectively, to critical-juncture and path-dependency analysis, is also, and importantly, ecological. Stability, like change, is an emergent property of institutions, which tend to stabilize after change in a manner analogous to allopatric speciation. EI is more than metaphorically biological in that institutional behaviors are driven by human behaviors whose evolution long preceded the appearance of institutions themselves.

This paper presents the application of Neutrosophic Set Theory (NST) in solving the Generalized Assignment Problem (GAP). GAP has previously been solved under a fuzzy environment. NST is a generalization of the concepts of the classical set, fuzzy set, interval-valued fuzzy set and intuitionistic fuzzy set. Elements of a neutrosophic set are characterized by truth-, falsity- and indeterminacy-membership functions, which is a more realistic way of expressing the parameters of real-life problems. Here the elements of the cost matrix for the GAP are considered as neutrosophic elements, which have not been considered earlier by any other author. The problem is solved by evaluating a score function matrix and then solving it by the Extremum Difference Method (EDM) [1] to get the optimal assignment. The method is demonstrated by a suitable numerical example.
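The score-matrix step can be sketched as follows. The score function chosen here is one common neutrosophic choice, not necessarily the paper's; the 2×2 matrix is hypothetical; and a brute-force one-to-one assignment stands in for both the EDM step and the more general many-tasks-per-agent structure of GAP:

```python
from itertools import permutations

def score(t, i, f):
    """One common neutrosophic score function s = (2 + T - I - F) / 3;
    the paper's exact choice may differ (illustrative assumption)."""
    return (2.0 + t - i - f) / 3.0

# Hypothetical 2x2 neutrosophic matrix: (truth, indeterminacy, falsity)
N = [[(0.9, 0.1, 0.1), (0.5, 0.5, 0.5)],
     [(0.4, 0.2, 0.6), (0.8, 0.2, 0.2)]]

# Crisp score matrix derived from the neutrosophic entries.
S = [[score(*cell) for cell in row] for row in N]

def best_assignment(S):
    """Brute-force one-to-one assignment maximizing total score
    (a simpler stand-in for the EDM step described in the abstract)."""
    n = len(S)
    return max(permutations(range(n)),
               key=lambda p: sum(S[i][p[i]] for i in range(n)))

print(best_assignment(S))  # (0, 1): agent 0 -> task 0, agent 1 -> task 1
```

Converting the indeterminate triples to a crisp score matrix first is what lets a classical assignment method run unchanged afterwards.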

The nature of gravity is fundamental to our understanding of our own solar system, the galaxy and the structure and evolution of the Universe. Einstein's general theory of relativity is the standard model that has been used for almost ninety years to describe gravitational phenomena on these various scales. We review the foundations of general relativity, discuss the recent progress in the tests of relativistic gravity, and present motivations for high-accuracy gravitational experiments in space. We also summarize the science objectives and technology needs for laboratory experiments in space, with the laboratory being the entire solar system. We discuss the advances in our understanding of fundamental physics anticipated in the near future and evaluate the discovery potential of the recently proposed gravitational experiments.

We consider demand systems for utility-maximizing consumers facing general budget constraints whose utilities are perturbed by additive linear shifts in marginal utilities. Budgets are required to be compact but are not required to be convex. We define demand generating functions (DGF) whose subgradients with respect to these perturbations are convex hulls of the utility-maximizing demands. We give necessary as well as sufficient conditions for DGF to be consistent with utility maximization, and establish under quite general conditions that utility-maximizing demands are almost everywhere single-valued and smooth in their arguments. We also give sufficient conditions for integrability of perturbed demand. Our analysis provides a foundation for applications of consumer theory to problems with nonlinear budget constraints.

The implications of restricted conformal invariance under conformal transformations preserving a plane boundary are discussed for general dimensions d. Calculations of the universal function of a conformal invariant ξ which appears in the two-point function of scalar operators in conformally invariant theories with a plane boundary are undertaken to first order in the ε = 4 - d expansion for the operator φ² in φ⁴ theory. The form of the associated functions of ξ for the two-point functions of the basic field φ^α and the auxiliary field λ in the N → ∞ limit of the O(N) non-linear sigma model is also derived for any d in the range 2 < d < 4, for the correlators φ^α φ^β and λλ. Using this method the form of the two-point function for the energy-momentum tensor in the conformal O(N) model with a plane boundary is also found. General results for the sum of the contributions of all derivative operators appearing in the operator product expansion, and also in a corresponding boundary operator expansion, to the two-point functions are also derived, making essential use of conformal invariance. (orig.)

We show that generalized spherical harmonics are well suited for representing the space and orientation molecular density in the resolution of the molecular density functional theory. We consider the common system made of a rigid solute of arbitrary complexity immersed in a molecular solvent, both represented by molecules with interacting atomic sites and classical force fields. The molecular solvent density ρ(r,Ω) around the solute is a function of the position r≡(x,y,z) and of the three Euler angles Ω≡(θ,ϕ,ψ) describing the solvent orientation. The standard density functional, equivalent to the hypernetted-chain closure for the solute-solvent correlations in liquid theory, is minimized with respect to ρ(r,Ω). The hitherto very expensive angular convolution products are advantageously replaced by simple products between projections onto generalized spherical harmonics. The dramatic gain in speed makes it possible to explore systematically molecular solutes of up to nanometric size in arbitrary solvents and to calculate their solvation free energy and associated microscopic solvent structure in at most a few minutes. We finally illustrate the formalism by tackling the solvation of molecules of various complexities in water.

A sudden transition in a system from an inanimate state to the living state (defined on the basis of present-day living organisms) would constitute a highly unlikely event hardly predictable from physical laws. From this uncontroversial idea, a self-consistent representation of the origin-of-life process is built up, which is based on the possibility of a series of intermediate stages. This approach requires a particular kind of stability for these stages, dynamic kinetic stability (DKS), which is not usually observed in regular chemistry, and which is reflected in the persistence of entities capable of self-reproduction. The necessary connection of this kinetic behaviour with far-from-equilibrium thermodynamic conditions is emphasized, and this leads to an evolutionary view for the origin of life in which multiplying entities must be associated with the dissipation of free energy. Any kind of entity involved in this process has to pay the energetic cost of irreversibility, but, by doing so, the contingent emergence of new functions is made feasible. The consequences of these views for the study of processes by which life can emerge are inferred.

Purpose: The purpose of this paper is to review intuition in the context of organizational change. We argue that intuition as a concept requires attention and its formulation is necessary prior to its application in organizations. The paper provides a critique of Dual Process Theory and highlights shortcomings in organization theorizing of intuition. Design/methodology/approach: The paper is conceptual and provides in-depth theoretical discussions by drawing from the literature on decision...

Two cases are considered: (1) rigid body motion of an airfoil-flap combination consisting of vertical translation of given amplitude, rotation of given amplitude about a specified axis, and rotation of given amplitude of the control surface alone about its hinge; the upwash for this problem is defined mathematically; and (2) sinusoidal gust of given amplitude and wave number, for which the upwash is defined mathematically. Simple universal formulas are presented for the most important aerodynamic coefficients in unsteady thin airfoil theory. The lift and moment induced by a generalized gust are evaluated explicitly in terms of the gust wavelength. Similarly, in the control surface problem, the lift, moment, and hinge moments are given as explicit algebraic functions of hinge location. These results can be used together with any of the standard numerical inversion routines for the elementary loads (pitch and heave).

It is demonstrated that an infinite set of string-tree level on-shell Ward identities, which are valid to all σ-model loop orders, can be systematically constructed without referring to string field theory. As examples, bosonic massive scattering amplitudes are calculated explicitly up to the second massive excited states. Ward identities satisfied by these amplitudes are derived by using zero-norm states in the spectrum. In particular, the inter-particle Ward identity generated by the D₂×D₂′ zero-norm state at the second massive level is demonstrated. The four physical propagating states of this mass level are then shown to form a large gauge multiplet. This result justifies our previous consideration of higher inter-spin symmetry from the generalized worldsheet σ-model point of view. (author)

We aim at assessing the validity limits of some simplifying hypotheses that, within a Riemannian geometric framework, have provided an explanation of the origin of Hamiltonian chaos and have made it possible to develop a method of analytically computing the largest Lyapunov exponent of Hamiltonian systems with many degrees of freedom. Therefore, numerical hypothesis testing has been performed for the Fermi-Pasta-Ulam beta model and for a chain of coupled rotators. These models, for which analytic computations of the largest Lyapunov exponents have been carried out in the mentioned Riemannian geometric framework, appear as paradigmatic examples to unveil the reason why the main hypothesis of quasi-isotropy of the mechanical manifolds sometimes breaks down. The breakdown is expected whenever the topology of the mechanical manifolds is nontrivial. This is an important step forward in view of developing a geometric theory of Hamiltonian chaos of general validity.

Almost 70 years ago, the first theoretical model for environmental effects on electronic excitation energies was derived. Since then, several different interpretations and refined models have been proposed for the perichromic shift of a chromophore due to its surrounding medium. Some of these models are contradictory. Here, the contributing terms are derived within the framework of long-range perturbation theory with the fewest approximations so far. The derivation is based on a state-specific interpretation of the interaction energies, and all terms can be identified with individual properties of either the chromophore or the surroundings, respectively. Further, the much-debated contribution due to transition moments coupled to the environment can be verified in the form of a non-resonant excitonic coupling to the dynamic polarizabilities in the environment. These general insights should clarify discussions and interpretations of environmental effects on electronic excitations and should foster the development of new models for the computation of these effects.

In all nontrivial cases renormalization, as it is usually formulated, is not a change of integration variables in the functional integral, plus parameter redefinitions, but a set of replacements, of actions and/or field variables and parameters. Because of this, we cannot write simple identities relating bare and renormalized generating functionals, or generating functionals before and after nonlinear changes of field variables. In this paper we investigate this issue and work out a general field-covariant approach to quantum field theory, which allows us to treat all perturbative changes of field variables, including the relation between bare and renormalized fields, as true changes of variables in the functional integral, under which the functionals Z and W=lnZ behave as scalars. We investigate the relation between composite fields and changes of field variables, and we show that, if J are the sources coupled to the elementary fields, all changes of field variables can be expressed as J-dependent redefinitions of the sources L coupled to the composite fields. We also work out the relation between the renormalization of variable-changes and the renormalization of composite fields. Using our transformation rules it is possible to derive the renormalization of a theory in a new variable frame from the renormalization in the old variable frame, without having to calculate it anew. We define several approaches, useful for different purposes, in particular a linear approach where all variable changes are described as linear source redefinitions. We include a number of explicit examples. (orig.)

General systems theory can be applied to problems in the teaching of speech communication courses. The author describes general systems theory as it is applied to the designing, conducting and evaluation of speech communication courses. (Author/MS)

Comparing Theory and Practice: An Application of Complexity Theory to General Ridgway's ... (report record; text fragmentary). Subject terms: complexity theory, history, practice, military theory, leadership. Key subcomponents of complexity theory examined include scale, adaptive leadership, and bottom-up feedback from the agents (the soldiers in the field), with a view to the uncertain future such agents will face. Approved for public release; distribution is unlimited.

Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first four sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fifth section surveys the topological complications implied by non-mean-field-type social network structures in general. The next three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock-Scissors-Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games.
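As an illustrative sketch in the spirit of the models this review surveys (not code from the review itself), the weak Prisoner's Dilemma on a ring with the Fermi imitation rule is a standard minimal setup; all parameter values below are hypothetical:

```python
import math
import random

random.seed(1)
N, b, K, steps = 100, 1.3, 0.1, 20000  # ring size, temptation, noise, updates
strat = [random.randint(0, 1) for _ in range(N)]  # 1 = cooperate, 0 = defect

def payoff(i):
    # Weak PD payoffs (T = b, R = 1, P = S = 0), summed over the
    # two nearest neighbours on the ring.
    total = 0.0
    for j in ((i - 1) % N, (i + 1) % N):
        if strat[i] == 1 and strat[j] == 1:
            total += 1.0
        elif strat[i] == 0 and strat[j] == 1:
            total += b
    return total

for _ in range(steps):
    i = random.randrange(N)
    j = (i + random.choice((-1, 1))) % N   # pick a random neighbour
    if strat[i] != strat[j]:
        # Fermi rule: imitate the neighbour with a probability that
        # grows with the payoff difference, smoothed by noise K.
        p = 1.0 / (1.0 + math.exp((payoff(i) - payoff(j)) / K))
        if random.random() < p:
            strat[i] = strat[j]

rho_c = sum(strat) / N  # final fraction of cooperators
```

Replacing the ring with other graphs (lattices, scale-free networks) is exactly the kind of topological variation whose effect on the long-run cooperator fraction the review analyzes.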

The neutral theory of molecular evolution postulates that nucleotide substitutions inherently take place in DNA as a result of point mutations followed by random genetic drift. In the absence of selective constraints, the substitution rate reaches the maximum value set by the mutation rate. The rate in globin pseudogenes is about 5 × 10⁻⁹ substitutions per site per year in mammals. Rates slower than this indicate the presence of constraints imposed by negative (natural) selection, which rejects and discards deleterious mutations.
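The quoted pseudogene rate translates into expected sequence divergence by simple arithmetic; a sketch, where the 80-million-year split time is purely illustrative:

```python
# Under neutrality, two lineages that split t years ago each accumulate
# mu * t substitutions per site, so the expected pairwise divergence is
# d = 2 * mu * t (ignoring multiple hits at the same site).
mu = 5e-9        # substitutions per site per year (pseudogene rate above)
t = 80e6         # hypothetical time since divergence, in years
d = 2 * mu * t   # expected substitutions per site, here 0.8
```

A divergence near one substitution per site means multiple hits are common, which is why real analyses apply a multiple-hit correction on top of this linear estimate.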

Evolution is shaping the world around us. At the core of every evolutionary process is a population of reproducing individuals. The outcome of an evolutionary process depends on population structure. Here we provide a general formula for calculating evolutionary dynamics in a wide class of structured populations. This class includes the recently introduced "games in phenotype space" and "evolutionary set theory." There can be local interactions for determining the relative fitness of individuals, but we require global updating, which means all individuals compete uniformly for reproduction. We study the competition of two strategies in the context of an evolutionary game and determine which strategy is favored in the limit of weak selection. We derive an intuitive formula for the structure coefficient, sigma, and provide a method for its efficient numerical calculation.
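In this line of work the structure coefficient enters a simple condition: for a two-strategy game with payoffs a (A vs A), b (A vs B), c (B vs A), d (B vs B), strategy A is favored under weak selection when σa + b > c + σd. A minimal check, with hypothetical Prisoner's Dilemma numbers:

```python
def a_is_favored(a, b, c, d, sigma):
    # Structure-coefficient condition under weak selection:
    # strategy A is favored over B iff sigma*a + b > c + sigma*d.
    return sigma * a + b > c + sigma * d

# Hypothetical Prisoner's Dilemma payoffs: a = reward for mutual
# cooperation, b = sucker's payoff, c = temptation, d = punishment.
R, S, T, P = 3, 0, 5, 1
well_mixed = a_is_favored(R, S, T, P, sigma=1.0)  # sigma = 1: no structure helps
structured = a_is_favored(R, S, T, P, sigma=4.0)  # large sigma: structure can favor C
```

With σ = 1 (a large well-mixed population) cooperation is not favored, while a sufficiently large σ, reflecting population structure, flips the inequality.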

The possibility of obtaining the de Sitter and Nariai spacetimes in a generalized theory of gravitation (proposed in succession by Utiyama-DeWitt, Parker-Fulling-Hu and Gurovich-Starobinski) is examined. It is shown that the generalized theory, with a suitable fixing of three parameters, admits both spacetimes, just like the general theory of relativity. (author)

The axiomatic theory unites aspects of neurophysiology, psychology and systems theory. The formulation of the structural nucleus of the theory relies on basic insights from biology, neurophysiology and systems theory. The structural nucleus allows the reconstruction of the essential properties of nervous-system function, organisation and development. The theory also contributes to the discussion of stochastic automata and artificial intelligence.

The formal theory of radiative processes in centrosymmetric coordination compounds of the LnX type (where Ln³⁺ is a trivalent lanthanide ion and X⁻ = Cl⁻, Br⁻) is put forward based on a symmetry vibronic crystal field-ligand polarisation model. This research considers a truncated basis set for the intermediate states of the central metal ion and derives general master equations to account for both the overall observed spectral intensities and the measured relative vibronic intensity distributions for parity-forbidden but vibronically allowed electronic transitions. In addition, a procedure which includes the closure approximation over the intermediate electronic states is included in order to estimate the quantitative crystal field contribution to the total transition dipole moments of various selected electronic transitions. This formalism is both general and flexible, and it may be employed for any electronic excitations involving f^N-type configurations of the rare earths in centrosymmetric coordination compounds in cubic environments and also in doped host crystals belonging to the space group Fm3m. (author)

Describes a sectoral and paradigmatic approach to evolutionary research. Argues that an evolutionary paradigm does not exist. Examines the socio-biological approach and that of a system-theoretically oriented general evolutionary theory. Uses the topics of cooperation, delimitation, and indoctrination to point out more promising modes of adoption.…

1. Stewardship of biological and ecological resources requires the ability to make integrative assessments of ecological integrity. One of the emerging methods for making such integrative assessments is multimetric indices (MMIs). These indices synthesize data, often from multiple levels of biological organization, with the goal of deriving a single index that reflects the overall effects of human disturbance. Despite the widespread use of MMIs, there is uncertainty about why this approach can be effective. An understanding of MMIs requires a quantitative theory that illustrates how the properties of candidate metrics relate to MMIs generated from those metrics. 2. We present the initial basis for such a theory by deriving the general mathematical characteristics of MMIs assembled from metrics. We then use the theory to derive quantitative answers to the following questions: Is there an optimal number of metrics to comprise an index? How does covariance among metrics affect the performance of the index derived from those metrics? And what are the criteria for deciding whether a given metric will improve the performance of an index? 3. We find that the optimal number of metrics to be included in an index depends on the theoretical distribution of the signal of the disturbance gradient contained in each metric. For example, if the rank-ordered parameters of a metric-disturbance regression can be described by a monotonically decreasing function, then an optimum number of metrics exists and can often be derived analytically. We derive the conditions under which adding a given metric can be expected to improve an index. 4. We find that the criterion defining such conditions depends nonlinearly on the signal of the disturbance gradient, the noise (error) of the metric and the correlation of the metric errors. Importantly, we find that correlation among metric errors increases the signal required for the metric to improve the index. 5. The theoretical framework presented in this
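One standard way to see why correlated metric errors matter: for an equal-weight index of n metrics whose errors share variance σ² and pairwise correlation ρ, the index error variance is (σ²/n)(1 + (n − 1)ρ), which stops shrinking as n grows once ρ > 0. This closed form is a textbook result used here for illustration; the paper's exact criterion may differ:

```python
def index_error_variance(n, sigma2, rho):
    # Variance of the mean of n equally weighted metrics whose errors
    # share variance sigma2 and pairwise correlation rho.
    return (sigma2 / n) * (1 + (n - 1) * rho)

independent = index_error_variance(10, 1.0, 0.0)  # averaging helps fully: 0.1
correlated = index_error_variance(10, 1.0, 0.5)   # correlation limits the gain: 0.55
noise_floor = 1.0 * 0.5  # limit as n grows: sigma2 * rho never averages away
```

Because the residual variance floors out at σ²ρ, each additional correlated metric must carry more disturbance signal to improve the index, consistent with point 4 above.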

The magnetohydrodynamic wave emission from several localized, periodic, kinematically specified fluid velocity fields are calculated using Lighthill's method for finding the far-field wave forms. The waves propagate through an isothermal and uniform plasma with a constant B field. General properties of the energy flux are illustrated with models of pulsating flux tubes and convective rolls. Interference theory from geometrical optics is used to find the direction of minimum fast-wave emission from multipole sources and slow-wave emission from discontinuous sources. The distribution of total flux in fast and slow waves varies with the ratios of the source dimensions l to the acoustic and Alfven wavelengths.

Two major obstacles hinder the application of evolutionary theory to the origin of eukaryotes. The first is more apparent than real: the endosymbiosis that led to the mitochondrion is often described as "non-Darwinian" because it deviates from the incremental evolution championed by the modern synthesis. Nevertheless, endosymbiosis can be accommodated by a multi-level generalization of evolutionary theory, which Darwin himself pioneered. The second obstacle is more serious: all of the major fea...

Asian and European wild boars were independently domesticated ca. 10,000 years ago. Since the 17th century, Chinese breeds have been imported to Europe to improve the genetics of European animals by introgression of favourable alleles, resulting in a complex mosaic of haplotypes. To interrogate the structure of these haplotypes further, we have run a new haplotype segregation analysis based on information theory, namely compression efficiency (CE). We applied the approach to sequence data from individuals from each phylogeographic region (n = 23 from Asia and Europe), including a number of major pig breeds. Our genome-wide CE is able to discriminate the breeds in a manner reflecting phylogeography. Furthermore, 24,956 non-overlapping sliding windows (each comprising 1,000 consecutive SNPs) were quantified for extent of haplotype sharing within and between Asia and Europe. The genome-wide distribution of the extent of haplotype sharing was quite different between groups. Unlike that of European pigs, the haplotype sharing of Asian pigs approximates a normal distribution. In line with this, we found that the European breeds possessed a number of genomic windows of dramatically higher haplotype sharing than the Asian breeds. Our CE analysis of sliding windows captures some of the genomic regions reported to contain signatures of selection in domestic pigs. Prominent among these regions, we highlight the role of a gene encoding the mitochondrial enzyme LACTB, which has been associated with obesity, and the gene encoding MYOG, a fundamental transcriptional regulator of myogenesis. The origin of these regions likely reflects either a population bottleneck in European animals, or selective targets on commercial phenotypes reducing allelic diversity in particular genes and/or regulatory regions.
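The compression-efficiency idea can be illustrated with a standard compression-based similarity measure, the normalized compression distance; the paper's CE statistic is related in spirit but not identical, and the sequences below are toy stand-ins for SNP windows:

```python
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small when the two sequences
    # compress well together (shared structure), near 1 when unrelated.
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

random.seed(0)
repeat = b"ACGT" * 200                              # highly regular toy "haplotype"
noise = bytes(random.randrange(256) for _ in range(800))  # incompressible sequence

d_same = ncd(repeat, repeat)  # shared structure -> low distance
d_diff = ncd(repeat, noise)   # no shared structure -> high distance
```

Applied window-by-window across genomes, a measure of this kind yields the kind of haplotype-sharing profile the record describes, with low distances flagging windows of shared haplotypes.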

The paper first discusses the influence of Labov on certain recent Chomskyan developments, starting from an identification of two radically different readings of the relationship between Labovian variationist sociolinguistics and the dominant theoretical paradigm of the latter half of the 20th century. ... The paper goes on to argue that this calls for a broader definition of sociolinguistics than just variationism and poses demands for both internal integration, viz. of linguistic disciplines, and external integration of the language sciences with evolutionary psychology, anthropology and social history.

Peer-to-Peer (P2P) file sharing is one of the key technologies for achieving attractive P2P multimedia social networking. In P2P file-sharing systems, file availability is improved by cooperative users who cache and share files. Note that file caching carries costs such as storage consumption and processing load. In addition, users have different degrees of cooperativity in file caching and are in different surrounding environments arising from the topological structure of P2P networks. Using evolutionary game theory, this paper evaluates the performance of P2P file-sharing systems in such heterogeneous environments. Using micro-macro dynamics, we analyze the impact of the heterogeneity of user selfishness on file availability and system stability. Further, through simulation experiments with agent-based dynamics, we reveal how other aspects, for example synchronization among nodes and topological structure, affect system performance. Both analytical and simulation results show that environmental heterogeneity contributes to file availability and system stability.

Evolutionary game theory has become one of the most diverse and far-reaching theories in biology. Applications of this theory range from cell dynamics to social evolution. However, many applications make it clear that inherent non-linearities of natural systems need to be taken into account. One way of introducing such non-linearities into evolutionary games is by the inclusion of multiple players. An example is social dilemmas, where group benefits could, e.g., increase less than linearly wi...

It was proved in a previous paper that a generalized circulation theorem characterizes Einstein's theory of gravitation as a special case of a more general theory of gravitation, which is also based on the principle of equivalence. Here the question is posed of whether it is possible to weaken this circulation theorem in such a way that it would imply theories more general than Einstein's. This problem is solved. In principle, there are two possibilities. One of them is essentially Weyl's theory. (author)

Highlights: → A new method is presented which corrects for core environment error from specular boundaries at the lattice cell level. → Solution obtained with generalized energy condensation provides improved approximation to the core level fine-group flux. → Iterative recondensation of the cross sections and unfolding of the flux provides on-the-fly updating of the core cross sections. → Precomputation of energy integrals and fine-group cross sections allows for easy implementation and efficient solution. → Method has been implemented in 1D and shown to correct the environment error, particularly in strongly heterogeneous cores. - Abstract: The standard multigroup method used in whole-core reactor analysis relies on energy condensed (coarse-group) cross sections generated from single lattice cell calculations, typically with specular reflective boundary conditions. Because these boundary conditions are an approximation and not representative of the core environment for that lattice, an error is introduced in the core solution (both eigenvalue and flux). As current and next generation reactors trend toward increasing assembly and core heterogeneity, this error becomes more significant. The method presented here corrects for this error by generating updated coarse-group cross sections on-the-fly within whole-core reactor calculations without resorting to additional cell calculations. In this paper, the fine-group core flux is unfolded by making use of the recently published Generalized Condensation Theory and the cross sections are recondensed at the whole-core level. By iteratively performing this recondensation, an improved core solution is found in which the core-environment has been fully taken into account. This recondensation method is both easy to implement and computationally very efficient because it requires precomputation and storage of only the energy integrals and fine-group cross sections. In this work, the theoretical basis and development

We perform a linear perturbation analysis for black hole solutions with a 'massive' Yang-Mills field (the Proca field) in Brans-Dicke theory and find that the results are quite consistent with those obtained via catastrophe theory, where thermodynamic variables play an intrinsic role. Based on this observation, we show the general relation between these two methods in generalized theories of gravity which are conformally related to the Einstein-Hilbert action.

A detailed comparison of the Newtonian approximation of the Einstein theory and the Newton theory of gravity is made. A difference of principle between these two theories is clarified at the stage of obtaining integrals of motion. The exact equations of motion and the Einstein equations show the existence of only zero integrals of motion, just as in the Newtonian approximation. The conclusion is that GRT has no classical Newtonian limit, since the integrals of motion in the Newton theory of gravity and in the Newtonian approximation of the Einstein theory do not coincide. (orig.)

Keynes's General Theory argues there is no self-regulating mechanism that guarantees full employment. Keynes's vision has been distorted by mainstream Keynesians to mean that it is the warts on the body of capitalism, not capitalism itself, that are the problem: frictions and imperfections and rigidities may interfere with the mechanism for self-regulation that inheres in the perfectly competitive model. This distortion has two supposed corollaries: first, that the more the economy resembles the textbook model of perfect competition, the less likely are lapses from full employment; second, that since imperfections are limited to the short run, so are lapses from full employment. Keynes was unable to convince the economics profession that the problem is capitalism; that the warts, real though they are, obscure a more fundamental problem. The reason is that Keynes lacked the mathematical tools to substantiate his vision. This paper deploys tools that were unavailable to Keynes in order to lay the foundations of a Keynesian macroeconomics for the 21st century. Keywords: Keynes, dynamic vs static models, flexprice adjustment, fixprice adjustment. JEL codes: B22, B41, E12

One of the most important goals of neuroscience is to establish precise structure-function relationships in the brain. Since the 19th century, a major scientific endeavour has been to associate structurally distinct cortical regions with specific cognitive functions. This was traditionally accomplished by correlating microstructurally defined areas with lesion sites found in patients with specific neuropsychological symptoms. Modern neuroimaging techniques with high spatial resolution have promised an alternative approach, enabling non-invasive measurements of regionally specific changes of brain activity that are correlated with certain components of a cognitive process. Reviewing classic approaches towards brain structure-function relationships that are based on correlational approaches, this article argues that these approaches are not sufficient to provide an understanding of the operational principles of a dynamic system such as the brain but must be complemented by models based on general system theory. These models reflect the connectional structure of the system under investigation and emphasize context-dependent couplings between the system elements in terms of effective connectivity. The usefulness of system models whose parameters are fitted to measured functional imaging data for testing hypotheses about structure-function relationships in the brain and their potential for clinical applications is demonstrated by several empirical examples.

General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

The nucleon generalized polarizabilities (GPs), probed in virtual Compton scattering (VCS), describe the spatial distribution of the polarization density in a nucleon. They are accessed experimentally via the process of electron-proton bremsstrahlung (ep → epγ) at electron-beam facilities, such as MIT-Bates, CEBAF (Jefferson Lab), and MAMI (Mainz). We present the calculation of the nucleon GPs and VCS observables at next-to-leading order in baryon chiral perturbation theory (BχPT), and confront the results with the empirical information. At this order our results are predictions, in the sense that all the parameters are well known from elsewhere. Within the relatively large uncertainties of our calculation we find good agreement with the experimental observations of VCS and the empirical extractions of the GPs. We find large discrepancies with previous chiral calculations - all done in heavy-baryon χPT (HBχPT) - and discuss the differences between BχPT and HBχPT responsible for these discrepancies. (orig.)

General Strain Theory (GST; Agnew, Criminology 30:47-87, 1992) posits that deviant behaviour results from adaptation to strain and the consequent negative emotions. Empirical research on GST has mainly focused on aggressive behaviours, while only a few studies have considered alternative manifestations of deviance, such as substance use and gambling. The aim of the present study is to test the ability of GST to explain gambling behaviours and substance use. The role of the family in promoting the adoption of gambling and substance use as coping strategies was also examined. Data from 266 families, with on average 8 observations per group, were collected. The multilevel nature of the data was verified before appropriate model construction. The clustered nature of the gambling data was analysed by a two-level Hierarchical Linear Model, while substance use was analysed by a Multivariate Linear Model. Results confirmed the effect of strain on gambling and substance use, while the positive effect of depressive emotions on these behaviours was not supported. The impact of the family on the individual tendency to engage in addictive behaviours was confirmed only for gambling.

We present a general approach to construct a class of generalized topological field theories with constraints by means of generalized differential calculus and its application to connection theory. It turns out that not only the ordinary BF formulations of general relativity and Yang-Mills theories, but also the N=1,2 chiral supergravities can be reformulated as these constrained generalized topological field theories once the free parameters in the Lagrangian are specially chosen. We also show that the Chern-Simons action on the boundary may naturally be induced from the generalized topological action in the bulk, rather than introduced by hand.

This paper reports theoretical work on economic production and uses electricity futures trading to illustrate its argument. The focus is the relationships between time, production and tradition, both in Nicholas Georgescu-Roegen's analytical representation of the production process (i.e., the flow/fund model) and in his dialectical scheme dealing with the evolutionary changes in the economic process. Our main arguments are (1) that the flow/fund model is designed to be employed in conjunction with attention to how the boundaries of a given process are determined, and (2) that process boundaries are dialectical distinctions - between process and not-process - that are strongly related to time and tradition. We propose that Georgescu-Roegen's The Entropy Law and the Economic Process is best understood as the elaboration of a general theory of economic production, and we develop two conceptual tools (time and meta-funds), both related to the dialectical distinction between process and not-process, which we use to operationalise this general theory. Finally, we demonstrate that, although trading in electricity futures is surprising under a stock/flow vs. services distinction (because electricity supply is classed as a service), it appears perfectly logical under Georgescu-Roegen's general theory: shortening time horizons, combined with a shift in the relationship between raw fuel supplies and power production procedures, leads to a shift in the status of electricity supply, from fund to flow. (author)

Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…

Throughout his career as a writer, Sigmund Freud maintained an interest in the evolutionary origins of the human mind and its neurotic and psychotic disorders. In common with many writers then and now, he believed that the evolutionary past is conserved in the mind and the brain. Today the "evolutionary Freud" is nearly forgotten. Even among Freudians, he is regarded as a red herring, relevant only to the extent that he diverts attention from the enduring achievements of the authentic Freud. There are three ways to explain these attitudes. First, the evolutionary Freud's key work is the "Overview of the Transference Neurosis" (1915). But it was published at an inopportune moment, forty years after the author's death, during the so-called "Freud wars." Second, Freud eventually lost interest in the "Overview" and the prospect of a comprehensive evolutionary theory of psychopathology. The publication of The Ego and the Id (1923), introducing Freud's structural theory of the psyche, marked the point of no return. Finally, Freud's evolutionary theory is simply not credible. It is based on just-so stories and a thoroughly discredited evolutionary mechanism, Lamarckian use-inheritance. Explanations one and two are probably correct but also uninteresting. Explanation number three assumes that there is a fundamental difference between Freud's evolutionary narratives (not credible) and the evolutionary accounts of psychopathology that currently circulate in psychiatry and mainstream journals (credible). The assumption is mistaken but worth investigating.

Women who use hormonal contraceptives have been shown to report more intense affective responses to partner infidelity than women with a natural cycle. Previous research also suggests that female jealousy is sensitive to hormonal changes in naturally cycling women, with a peak around ovulation, while women using hormonal contraceptives are less sensitive. This research is aimed at exploring women's perception of couple conflicts in line with predictions derived from evolutionary theory. A fa...

Topological field theories can be formulated by beginning from a higher dimensional action. The additional dimension is an unphysical time parameter, and the action is the derivative of a functional W with respect to this variable. In the d = 4 case, it produces actions which are shown to give topological quantum field theory after gauge fixing. In d = 3 this action leads to the Hamiltonian, which yields the Floer groups if the additional parameter is treated as physical when W is the pure Chern-Simons action. This W can be used to define a topological quantum field theory in d = 3 by treating the additional parameter as unphysical. The BFV-BRST operator quantization of this theory yields an enlarged system which has only first class constraints. This is not identical to the previously introduced d = 3 topological quantum field theory, although the latter theory is also shown to give, after a partial gauge fixing, the theory we began with. (author). 18 refs

At present, superstring theory is the only candidate for a unified theory of all fundamental interactions. For this reason, the various aspects of string theory have been attracting great attention. String theory has a nontrivial gauge symmetry and is therefore an interesting object from the viewpoint of the application of general quantization methods. This paper discusses the bosonic string theory. The purpose of this paper is a consistent operator quantization of the theory with the action. The natural basis for it is provided by the method of generalized canonical quantization.

Evolutionary graph theory was proposed by Lieberman et al. in 2005. In previous papers on evolutionary graphs (EGs), the fitness of the residents is generally assumed to be unity, and the fitness of a mutant is assumed to be a constant r. In this paper we extend EGs to the general case in which the fitness of a mutant depends on frequency. The corresponding properties of these new EGs are analyzed, and the fixation probability is obtained for large populations.
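
The constant-fitness baseline that this work generalizes can be illustrated with a small simulation. The sketch below (our own illustrative code, for a well-mixed population rather than a general graph) estimates the fixation probability of a single mutant under the Moran process, with the mutant fitness supplied as a function of the current mutant count so that frequency dependence is easy to plug in:

```python
import random

def moran_fixation(N, fitness, trials=2000, seed=1):
    """Monte Carlo estimate of the fixation probability of a single
    mutant in a well-mixed population of N individuals under the
    Moran process.  Residents have fitness 1; fitness(i) gives the
    mutant fitness when i mutants are present (frequency-dependent)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1  # start from a single mutant
        while 0 < i < N:
            f = fitness(i)
            # one individual reproduces, chosen proportionally to fitness
            mutant_born = rng.random() < i * f / (i * f + (N - i))
            # one individual, chosen uniformly at random, dies
            mutant_dies = rng.random() < i / N
            if mutant_born and not mutant_dies:
                i += 1
            elif mutant_dies and not mutant_born:
                i -= 1
        fixed += (i == N)
    return fixed / trials
```

For a neutral mutant (`fitness = lambda i: 1.0`) the estimate approaches the exact value 1/N, and for constant fitness r it approaches (1 - 1/r)/(1 - r^-N); frequency-dependent cases are obtained by making `fitness` depend on `i`.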

We construct analytic solutions of open bosonic string field theory for any exactly marginal deformation in any boundary conformal field theory when properly renormalized operator products of the marginal operator are given. We explicitly provide such renormalized operator products for a class of marginal deformations which include the deformations of flat D-branes in flat backgrounds by constant massless modes of the gauge field and of the scalar fields on the D-branes, the cosine potential for a space-like coordinate, and the hyperbolic cosine potential for the time-like coordinate. In our construction we use integrated vertex operators, which are closely related to finite deformations in boundary conformal field theory, while previous analytic solutions were based on unintegrated vertex operators. We also introduce a modified star product to formulate string field theory around the deformed background. (orig.)

Nobel Laureates, with their contributions to the development of the theory of general equilibrium, have made this theory one of the most important for theoretical and practical analysis of the overall economy and the efficient use of economic resources. The results of the research show that the contributions of Nobel Laureates in economics belong to two main frameworks in the development of general equilibrium theory: one was the mathematical model of general equilibrium developed by J...

This textbook provides a comprehensive introduction to nature-inspired metaheuristic methods for search and optimization, including the latest trends in evolutionary algorithms and other forms of natural computing. Over 100 different types of these methods are discussed in detail. The authors emphasize non-standard optimization problems and utilize a natural approach to the topic, moving from basic notions to more complex ones. An introductory chapter covers the necessary biological and mathematical backgrounds for understanding the main material. Subsequent chapters then explore almost all of the major metaheuristics for search and optimization created based on natural phenomena, including simulated annealing, recurrent neural networks, genetic algorithms and genetic programming, differential evolution, memetic algorithms, particle swarm optimization, artificial immune systems, ant colony optimization, tabu search and scatter search, bee and bacteria foraging algorithms, harmony search, biomolecular computin...

This paper reports two distinct but related advances: (1) The development and application of fluid theories that transcend conventional magnetohydrodynamics (MHD), in particular, theories that are valid in the long-mean-free-path limit and in which pressure anisotropy, heat flow, and arbitrarily strong sheared flows are treated consistently. (2) The discovery of new pressure-confining plasma configurations that are self-organized relaxed states. (author)

The evolution of socio-economic systems depends on the interdependent decision processes of their underlying components. The mathematical model describing the strategic decisions of players within a socio-economic game is ''game theory''. ''Quantum game theory'' is a mathematical and conceptual amplification of classical game theory: the space of all conceivable decision paths is extended from the purely rational, measurable space to the Hilbert space of complex numbers, the mathematical space in which quantum theory is formulated. Through the concept of a potential entanglement of the imaginary quantum strategy parts, it is possible to include cooperative decision paths caused by cultural or moral standards. If this strategic entanglement is large enough, additional Nash equilibria can occur, previously dominant strategies can become nonexistent, and new evolutionarily stable strategies appear for some game classes. Within this PhD thesis the main results of classical and quantum games are summarized, and all possible game classes of evolutionary (2 player)-(2 strategy) games are extended to quantum games. It is shown that the quantum extension of classical games with an underlying dilemma-like structure gives different results if the strength of strategic entanglement is above a certain barrier. After the German summary and the introductory paper, five different applications of the theory are discussed within the thesis. (orig.)
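
A concrete instance of the entanglement effect described above is the Eisert-Wilkens-Lewenstein (EWL) quantum prisoner's dilemma. The sketch below is the standard construction from the literature, not code from the thesis: at maximal entanglement the quantum strategy Q makes the cooperative outcome an equilibrium, dissolving the classical dilemma.

```python
import numpy as np

# Eisert-Wilkens-Lewenstein quantum prisoner's dilemma at maximal
# entanglement.  Basis order |CC>, |CD>, |DC>, |DD>; row-player payoffs:
PAYOFF_A = np.array([3., 0., 5., 1.])

C = np.eye(2)                           # classical "cooperate"
D = np.array([[0., 1.], [-1., 0.]])     # classical "defect"
Q = np.array([[1j, 0.], [0., -1j]])     # the quantum strategy of Eisert et al.

gamma = np.pi / 2                       # maximal entanglement
DD = np.kron(D, D)
# Since DD @ DD = I, the entangling gate exp(i*gamma*DD/2) has this closed form:
J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * DD

def payoff_A(UA, UB):
    """Expected payoff of player A for local strategies UA, UB."""
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                       # start from |CC>
    psi = J.conj().T @ np.kron(UA, UB) @ (J @ psi0)
    return float(PAYOFF_A @ np.abs(psi) ** 2)

# At gamma = pi/2, (Q, Q) yields the cooperative payoff 3, while defecting
# against Q earns the sucker's payoff 0 - so (Q, Q) is a Nash equilibrium.
```

Setting `gamma = 0` recovers the classical game, where mutual defection with payoff 1 is the only equilibrium; this is the "barrier" in entanglement strength above which the quantum extension gives different results.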

In Topics in the Foundations of General Relativity and Newtonian Gravitation Theory, David B. Malament presents the basic logical-mathematical structure of general relativity and considers a number of special topics concerning the foundations of general relativity and its relation to Newtonian gravitation theory. These special topics include the geometrized formulation of Newtonian theory (also known as Newton-Cartan theory), the concept of rotation in general relativity, and Gödel spacetime. One of the highlights of the book is a no-go theorem that can be understood to show that there is

We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.

In this paper, we propose a new framework for quantum field theory in terms of consistency conditions. The consistency conditions that we consider are ''associativity'' or ''factorization'' conditions on the operator product expansion (OPE) of the theory, and are proposed to be the defining property of any quantum field theory. Our framework is presented in the Euclidean setting, and is applicable in principle to any quantum field theory, including non-conformal ones. In our framework, we obtain a characterization of perturbations of a given quantum field theory in terms of a certain cohomology ring of Hochschild-type. We illustrate our framework with the free field, but our constructions are general and apply also to interacting quantum field theories. For such theories, we propose a new scheme to construct the OPE which is based on the use of non-linear quantized field equations.

The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
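
The link between fixation probabilities and a potential can be made concrete with the standard birth-death formula, in which the fixation probability is determined by running products of backward-to-forward transition ratios; the log of that product is the discrete analogue of a potential difference. A hedged sketch (the function name and interface are ours, not the Letter's):

```python
def fixation_probability(t_plus, t_minus, N):
    """Exact fixation probability of a single mutant in a one-dimensional
    birth-death chain on states 0..N, with 0 and N absorbing:
        rho = 1 / (1 + sum_{k=1}^{N-1} prod_{j=1}^{k} t_minus(j)/t_plus(j)).
    Only the ratio t_minus/t_plus matters; the log of the running product
    plays the role of a potential-function difference."""
    total, prod = 1.0, 1.0
    for k in range(1, N):
        prod *= t_minus(k) / t_plus(k)
        total += prod
    return 1.0 / total
```

For the neutral case (t_plus = t_minus) this recovers rho = 1/N, and for a constant backward-to-forward ratio 1/r (the constant-fitness Moran process) it reproduces the classical (1 - 1/r)/(1 - r^-N).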

The 2008 international financial crisis triggered retrospection on both theory and policy, reaching a macroeconomic consensus that the financial system plays an important role in the macro economy and macroeconomic theory must be restructured to incorporate endogenous financial factors. Reflecting on the inherent flaws of traditional mainstream economics, this paper puts forward a “macrofinance” proposition as a new paradigm for macro financial analysis. As a scientific methodology based on systematic logic, the major feature of the macrofinance framework is that we must analyze the financial system as a core part of a complete and endogenous analytical framework, instead of only focusing on the money or credit. The goal of “macrofinance” is to return to scientific economic methodologies by analyzing the inherent laws of modern financial systems to set up a comprehensive theoretical framework that unifies the financial sector with the real economy and combines theory and policy practice.

This is the first in a series of papers, the overall objective of which is the formulation of a new covariant approach to nonequilibrium statistical mechanics in classical general relativity. The object here is the development of a tractable theory for self-gravitating systems. It is argued that the ''state'' of an N-particle system may be characterized by an N-particle distribution function, defined in an 8N-dimensional phase space, which satisfies a collection of N conservation equations. By mapping the true physics onto a fictitious ''background'' spacetime, which may be chosen to satisfy some ''average'' field equations, one then obtains a useful covariant notion of ''evolution'' in response to a fluctuating ''gravitational force.'' For many cases of practical interest, one may suppose (i) that these fluctuating forces satisfy linear field equations and (ii) that they may be modeled by a direct interaction. In this case, one can use a relativistic projection operator formalism to derive exact closed equations for the evolution of such objects as an appropriately defined reduced one-particle distribution function. By capturing, in a natural way, the notion of a dilute gas, or impulse, approximation, one is then led to a comparatively simple equation for the one-particle distribution. If, furthermore, one treats the effects of the fluctuating forces as ''localized'' in space and time, one obtains a tractable kinetic equation which reduces, in the Newtonian limit, to the standard Landau equation.

We develop a theory for stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game theoretic framework, and we look for Nash subgame perfect equilibrium points...... examples of time inconsistency in the literature are easily seen to be special cases of the present theory. We also prove that for every time inconsistent problem, there exists an associated time consistent problem such that the optimal control and the optimal value function for the consistent problem...

This paper outlines the general practice world view and introduces the main features of the theories of 'chaos' and complexity. From this, analogies are drawn between general practice and the theories, which suggest a different way of understanding general practice and point to future developments in general practice research. A conceptual and practical link between qualitative and quantitative methods of research is suggested. Methods of combining data about social context with data about in...

The author shows that the secular equation in KKR (Korringa, Kohn and Rostoker) theory retains its separable structure also in the case of non-muffin-tin potentials. This generalisation has been extensively discussed recently. During this discussion, in which the possible necessity of so-called near

The canonical commutation relations are analyzed in detail in the indefinite-metric quantum field theory of gravity based on the vierbein formalism. It is explicitly verified that the BRS charge, the local-Lorentz-BRS charge and the Poincare generators satisfy the expected commutation relations. (author)

Cyber bullying has become more pervasive as a result of advances in communication technology such as email, text messaging, chat rooms, and social media sites. Despite the growth in research on correlates associated with engagement in cyber bullying, few studies test the applicability of criminological theories to explain engagement in cyber…

The history of the gradual unification of general relativity theory and quantum field theory on the basis of unified geometrical principles is traced. Gauge invariance principles became universal for the construction of all physical theories. Quantum mechanics, electrodynamics and Einstein's gravitation theory were used to form the geometrical principles. The identity of inertial and gravitational masses is the experimental basis of the general relativity theory (GRT). It is shown that a correct understanding of the foundations of GRT is a developing process, related to the development of present-day physics and stimulating this development.

A GPT algorithm for estimation of eigenvalues and reaction-rate ratios is developed for the neutron transport problems in 2D fuel assemblies with isotropic scattering. In our study the GPT formulation is based on the integral transport equations. The mathematical relationship between the generalized flux importance and generalized source importance functions is applied to transform the generalized flux importance transport equations into the integro-differential forms. The resulting adjoint and generalized adjoint transport equations are then solved using the method of cyclic characteristics (MOCC). Because of the presence of negative adjoint sources, a biasing/decontamination scheme is applied to make the generalized adjoint functions positive in such a way that it can be used for the multigroup re-balance technique. To demonstrate the efficiency of the algorithms, perturbative calculations are performed on a 17 x 17 PWR lattice. (authors)

A perturbation expression to calculate the variations in the rates of integral parameters (such as reaction rates) of a reactor using Time-Independent Generalized Perturbation Theory was developed. This theory makes use of the concepts of neutron generation and neutron importance with respect to a given process occurring in a system. Time-Dependent Generalized Perturbation Theory is then applied to the calculation of burnup, using the expressions derived by A. Gandini along with the perturbation expression derived in the Time-Independent Generalized Perturbation Theory. (Author)

Evolutionary dynamics describe how the population composition changes in response to the fitness levels, resulting in a closed-loop feedback system. Recent work established a connection between passivity theory and certain classes of population games, namely so-called “stable games”. In particular, it was shown that a combination of stable games and (an analogue of) passive evolutionary dynamics results in stable convergence to Nash equilibrium. This paper considers the converse question of necessary conditions for evolutionary dynamics to exhibit stable behaviors for all generalized stable games. Using methods from robust control analysis, we show that if an evolutionary dynamic does not satisfy a passivity property, then it is possible to construct a generalized stable game that results in instability. The results are illustrated on selected evolutionary dynamics with particular attention to replicator dynamics, which are also shown to be lossless, a special class of passive systems.
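
The losslessness of the replicator dynamics mentioned above can be illustrated numerically: for zero-sum rock-paper-scissors (a standard stable game with an interior Nash equilibrium), the KL divergence from the equilibrium acts as a storage function that is conserved along trajectories, up to integration error. A sketch under those assumptions (the game and step size are our illustrative choices):

```python
import numpy as np

# Zero-sum rock-paper-scissors payoff matrix: a standard "stable game".
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])
xstar = np.ones(3) / 3.0              # interior Nash equilibrium

def replicator_step(x, dt):
    """One Euler step of the replicator dynamics dx_i/dt = x_i((Ax)_i - x.Ax)."""
    p = A @ x
    return x + dt * x * (p - x @ p)

def storage(x):
    """KL divergence KL(xstar || x): a storage function that the lossless
    replicator dynamics conserve along RPS trajectories."""
    return float(np.sum(xstar * np.log(xstar / x)))

x = np.array([0.5, 0.3, 0.2])
s0 = storage(x)
for _ in range(2000):                 # integrate to t = 2 with dt = 1e-3
    x = replicator_step(x, 1e-3)
# storage(x) stays numerically close to s0, and x remains on the simplex:
# the trajectory orbits the equilibrium instead of converging or diverging.
```

Strictly passive dynamics would make this storage function decrease; the conserved (orbiting) behavior is what distinguishes the lossless replicator dynamics within the passive class.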

General systems theory is a set of related definitions, assumptions, and propositions which deal with reality as an integrated hierarchy of organizations of matter and energy. In this paper, the author defines the concepts of space, time, matter, energy, and information in terms of their meaning in general systems theory. He defines a system as a…

A general theory of fluidelastic instability for a tube array in crossflow is presented. Various techniques to obtain the motion-dependent fluid-force coefficients are discussed and the general instability characteristics are summarized. The theory is also used to evaluate the results of other mathematical models for crossflow-induced instability.

Some thirty years ago Lewis published his Convention: A Philosophical Study (Lewis, 2002). This laid the foundation for a game-theoretic approach to social conventions, but became more famously known for its seminal analysis of common knowledge; the concept receiving its canonical analysis...... in Aumann (1976) and which, together with the assumptions of perfect rationality, came to be defining of classical game theory. However, classical game theory is currently undergoing severe crisis as a tool for exploring social phenomena; a crisis emerging from the problem of equilibrium selection around......-defined metaphors of individual learning and social imitation processes, from which a revised theory of convention may be erected (see Sugden 2004, Binmore 1993 and Young 1998). This paper makes a general argument in support of the evolutionary turn in the theory of convention by a progressive exposition of its......

The entire treatment presented here is framed by questions which led to and now lead out of the general theory of relativity: can an absolute acceleration be defined meaningfully? Do gravitational effects propagate with infinite velocity as Newton required? Can the general theory correctly reflect the dynamics of the whole universe while consistently describing stellar evolution? Can a theory which presupposes measurement of properties of space through the interaction of matter be made compatible with a theory in which dimensions of the objects measured are so small that location loses meaning? The book gives the mathematics necessary to understand the theory and begins in Riemannian geometry. Contents, abridged: Foundations of Riemannian geometry. Foundations of Einstein's theory of gravitation. Linearised theory of gravitation, far fields and gravitational waves. Invariant characterisation of exact solutions. Gravitational collapse and black holes. Cosmology. Non-Einsteinian theories of gravitation. Index

We present a new normal-mode formalism for computing the response of an aspherical, self-gravitating, linear viscoelastic earth model to an arbitrary surface load. The formalism makes use of recent advances in the theory of the Earth's free oscillations, and is based upon an eigenfunction expansion methodology, rather than the traditional Love-number approach to surface-loading problems. We introduce a surface-load representation theorem analogous to Betti's reciprocity relation in seismology. Taking advantage of this theorem and the biorthogonality of the viscoelastic modes, we determine the complete response to a surface load in the form of a Green's function. We also demonstrate that each viscoelastic mode has its own unique energy partitioning, which can be used to characterize it. In subsequent papers, we apply the theory to spherically symmetric and aspherical earth models.

Each chapter ends with a list of references for further reading. Undoubtedly, these will be useful for anyone who wishes to pursue the topics deeper. … the book has many MATLAB examples and problems presented at appropriate places. … the book will become a widely used classroom text for a second course on linear algebra. It can be used profitably by graduate and advanced level undergraduate students. It can also serve as an intermediate course for more advanced texts in matrix theory. This is a lucidly written book by two authors who have made many contributions to linear and multilinear algebra. -K.C. Sivakumar, IMAGE, No. 47, Fall 2011. Always mathematically constructive, this book helps readers delve into elementary linear algebra ideas at a deeper level and prepare for further study in matrix theory and abstract algebra. -L'enseignement Mathématique, January-June 2007, Vol. 53, No. 1-2.

The purpose of this study is to shed light on the origins of the endogenous money theory and analyze the current debates on this topic. The endogenous money approach depends on a fundamental postulate: as banks meet the credit needs of non-financial businesses, new deposits emerge in the banking sector. Similarly, as the necessary reserves are found for these new deposits, the broad money expands as well. Even though the central bank can intervene in this process, it cannot fully control it. There...

Cantor proved in 1874 [Cantor G (1874) J Reine Angew Math 77:258-262] that the continuum is uncountable, and Hilbert's first problem asks whether it is the smallest uncountable cardinal. A program arose to study cardinal invariants of the continuum, which measure the size of the continuum in various ways. By Gödel [Gödel K (1939) Proc Natl Acad Sci USA 25(4):220-224] and Cohen [Cohen P (1963) Proc Natl Acad Sci USA 50(6):1143-1148], Hilbert's first problem is independent of ZFC (Zermelo-Fraenkel set theory with the axiom of choice). Much work both before and since has been done on inequalities between these cardinal invariants, but some basic questions have remained open despite Cohen's introduction of forcing. The oldest and perhaps most famous of these is whether "p = t," which was proved in a special case by Rothberger [Rothberger F (1948) Fund Math 35:29-46], building on Hausdorff [Hausdorff (1936) Fund Math 26:241-255]. In this paper we explain how our work on the structure of Keisler's order, a large-scale classification problem in model theory, led to the solution of this problem in ZFC as well as of an a priori unrelated open question in model theory.

We consider a natural generalization of trinification to theories with 3N SU(3) gauge groups. These theories have a simple moose representation and a gauge boson spectrum that can be interpreted via the deconstruction of a 5D theory with unified symmetry broken on a boundary. Although the matter and Higgs sectors of the theory have no simple extra-dimensional analog, gauge unification retains features characteristic of the 5D theory. We determine possible assignments of the matter and Higgs fields to unified multiplets and present theories that are viable alternatives to minimal trinified GUTs

The chase after a world formula is presently the most dazzling task of natural science. With the development of a radically new scientific theory, unifying not only relativity and quantum theory but also astrophysics and string theory into a common view, the author launches the first serious candidate for a TOE (Theory of Everything) into the scientific discussion. The General Theory of Duality (GDT) offers not only surprising answers to fundamental questions of physics, but also identifies the smallest component of our universe, one that has long been known but ignored: Planck's constant. This book may well help create a new world view in physics. (GL)

A theory of Bose-Einstein condensation of light in a dye-filled optical microcavity is presented. The theory is based on the hierarchical maximum entropy principle and allows one to investigate the fluctuating behavior of the photon gas in the microcavity for all numbers of photons, dye molecules, and excitations at all temperatures, including the whole critical region. The master equation describing the interaction between photons and dye molecules in the microcavity is derived and the equivalence between the hierarchical maximum entropy principle and the master equation approach is shown. The cases of a fixed mean total photon number and a fixed total excitation number are considered, and a much sharper, nonparabolic onset of a macroscopic Bose-Einstein condensation of light in the latter case is demonstrated. The theory does not use the grand canonical approximation, takes into account the photon polarization degeneracy, and exactly describes the microscopic, mesoscopic, and macroscopic Bose-Einstein condensation of light. Under certain conditions, it predicts sub-Poissonian statistics of the photon condensate and the polarized photon condensate, and a universal relation takes place between the degrees of second-order coherence for these condensates. In the macroscopic case, there appear a sharp jump in the degrees of second-order coherence, a sharp jump and kink in the reduced standard deviations of the fluctuating numbers of photons in the polarized and whole condensates, and a sharp peak, a cusp, of the Mandel parameter for the whole condensate in the critical region. The possibility of nonclassical light generation in the microcavity with the photon Bose-Einstein condensate is predicted.

We propose a model for the description of hot-electron phenomena in semiconductors. Based on this model we are able to reproduce accurately the main characteristics observed in experiments on electric-field transport, optical absorption, steady-state photoluminescence and relaxation processes. Our theory contains no free or adjustable parameters, is computationally very fast, and incorporates the main collision mechanisms, including screening and phonon-heating effects. Our description is based on a set of nonlinear rate equations in which the interactions are represented by coupling coefficients, or effective frequencies. We calculate the three coefficients from the characteristic constants and the band structure of the material. (author)

Homogenization is not about periodicity or Gamma-convergence, but about understanding which effective equations to use at the macroscopic level, knowing which partial differential equations govern mesoscopic levels, without using probabilities (which destroy physical reality); instead, one uses various topologies of weak type: the G-convergence of Sergio Spagnolo, the H-convergence of Francois Murat and the author, and others, some of them responsible for the appearance of nonlocal effects, which many theories in continuum mechanics or physics guessed wrongly. For a better understanding of 20th century science,

A diagrammatic many-body perturbation theory applicable to arbitrary model spaces is presented. The necessity of having a complete model space (all possible occupancies of the partially-filled shells) is avoided. This requirement may be troublesome for systems with several well-spaced open shells, such as most atomic and molecular excited states, as a complete model space spans a very broad energy range and leaves out states within that range, leading to poor or no convergence of the perturbation series. The method presented here would be particularly useful for such states. The solution of a model problem (He₂ excited Σ⁺_g states) is demonstrated. (Auth.)

The partial KKM principle for an abstract convex space is an abstract form of the classical KKM theorem. In this paper, we derive generalized forms of the Ky Fan minimax inequality, the von Neumann-Sion minimax theorem, the von Neumann-Fan intersection theorem, the Fan-type analytic alternative, and the Nash equilibrium theorem for abstract convex spaces satisfying the partial KKM principle. These results are compared with previously known cases for G-convex spaces. Consequently, our results unify and generalize most previously known particular cases of the same nature. Finally, we add some detailed historical remarks on related topics.

We extend the Einstein-aether theory to include the Maxwell field in a nontrivial manner by taking into account its interaction with the time-like unit vector field characterizing the aether. We also include a generic matter term. We present a model with a Lagrangian that includes cross-terms linear and quadratic in the Maxwell tensor, linear and quadratic in the covariant derivative of the aether velocity four-vector, linear in its second covariant derivative and in the Riemann tensor. We decompose these terms with respect to the irreducible parts of the covariant derivative of the aether velocity, namely, the acceleration four-vector, the shear and vorticity tensors, and the expansion scalar. Furthermore, we discuss the influence of an aether non-uniform motion on the polarization and magnetization of the matter in such an aether environment, as well as on its dielectric and magnetic properties. The total self-consistent system of equations for the electromagnetic and the gravitational fields, and the dynamic equations for the unit vector aether field are obtained. Possible applications of this system are discussed. Based on the principles of effective field theories, we display in an appendix all the terms up to fourth order in derivative operators that can be considered in a Lagrangian that includes the metric, the electromagnetic and the aether fields.

For two decades, research has been suggested and conducted into the causation and development of cancers in seemingly diverse and unrelated populations such as blind individuals, shift-workers, flight personnel, Arctic residents and subsets of sleepers. One common denominator of these investigations is "melatonin". Another common denominator is that all these studies implicitly pursued the validity of the so-called "melatonin hypothesis", of a corollary and of associated predictions which can be united in our proposed theory of "carcinogenesis due to chronodisruption". The new theory suggests that the various predictions investigated between 1987 and 2008 represent different aspects of the same problem. Indeed, abundant experimental evidence supports the notion that the final common cause of many cases of cancer may be what has been termed chronodisruption (CD), a relevant disturbance of the temporal organization or order of physiology, endocrinology, metabolism and behaviour. While melatonin as a key time messenger and time keeper can be a marker of CD, it is probably only partially related to the differential cancer occurrence apparent in individuals who chronically or frequently experience an excess or deficit of chronodisruption.

Overlaps and matrix elements of one- and two-body operators are calculated in a space spanned by multiphonons of different types, properly taking the Pauli principle into account. Two methods are developed: a generalized Wick's theorem dealing with new contractions, and recursion formulas well suited for numerical applications.

Starting from an infinitesimal transformation expressed with a Killing vector and systematically using the formalism of local tetrads, we show that, in the framework of general relativity, the Dirac equation may be formulated solely in terms of the four local vectors that determine the gravitational potentials, their gradients, and the 4-vector potential of the electromagnetic field.

In this paper, we discuss the implication of the generalized virial relations in the spectral analysis of Liouville operators. In particular, we refer to the existence problem of the analytic continuation of these super-operators and their resolvents occurring in the reduced dynamics description of open systems. For completeness, we outline the main ideas of the subdynamics approach. (author)

A literature survey on the Earth's magnetic field, citing the works of Gauss, Erman-Petersen, Quintus Icilius and Neumayer is presented. The general formulas for the representation of the potential and components of the Earth's magnetic force are presented. An analytical representation of magnetic condition of the Earth based on observations is also made.

This letter investigates the supervised learning problem with observations drawn from certain general stationary stochastic processes. Here, by general, we mean that many stationary stochastic processes can be included. We show that when the stochastic processes satisfy a generalized Bernstein-type inequality, a unified treatment of learning schemes with various mixing processes can be conducted and a sharp oracle inequality for generic regularized empirical risk minimization schemes can be established. The obtained oracle inequality is then applied to derive convergence rates for several learning schemes, such as empirical risk minimization (ERM), least-squares support vector machines (LS-SVMs) using given generic kernels, and SVMs using Gaussian kernels for both least-squares and quantile regression. It turns out that for independent and identically distributed (i.i.d.) processes, our learning rates for ERM recover the optimal rates. For non-i.i.d. processes, including geometrically [Formula: see text]-mixing Markov processes, geometrically [Formula: see text]-mixing processes with restricted decay, [Formula: see text]-mixing processes, and (time-reversed) geometrically [Formula: see text]-mixing processes, our learning rates for SVMs with Gaussian kernels match, up to some arbitrarily small extra term in the exponent, the optimal rates. For the remaining cases, our rates are at least close to the optimal rates. As a by-product, the assumed generalized Bernstein-type inequality also provides an interpretation of the so-called effective number of observations for various mixing processes.
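The letter is a theoretical analysis, but the LS-SVM scheme it studies is easy to sketch. The following minimal example (illustrative only; the data, kernel width and regularization constant are arbitrary choices, not taken from the letter) implements regularized least-squares regression with a Gaussian kernel:

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_ls_svm(X, y, lam=1e-3, gamma=1.0):
    # Regularized ERM with squared loss: alpha = (K + lam * n * I)^{-1} y
    n = len(X)
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_test, gamma=1.0):
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

# Noisy observations of sin(x); here drawn i.i.d. for simplicity,
# whereas the letter allows general mixing processes.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

alpha = fit_ls_svm(X, y, lam=1e-3, gamma=2.0)
Xt = np.linspace(-3, 3, 50)[:, None]
mse = np.mean((predict(X, alpha, Xt, gamma=2.0) - np.sin(Xt[:, 0])) ** 2)
```

The same estimator applies unchanged when the observations come from a mixing process; only the attainable learning rate changes.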

The principal landmarks in the development of general relativity (exclusive of cosmology) during the first 30 years after its founding are presented. The emergence of the new gravitational laws, their experimental consequences and the consequent growth of the present concern with gravitational collapse and black holes are traced. (U.K.)

The main concepts and capabilities of GRG, a specialized computer algebra system intended for performing calculations in gravitation theory, are described. The GRG system is written in the STANDARD LISP language. The program consists of two parts: the first for setting initial data, the second for specifying a sequence of calculations. The system can function in three formalisms: coordinate, tetradic with the Lorentz basis, and spinor. The major capabilities of the GRG system are the following: calculation of the connection and curvature from a specified metric, tetrad and torsion; determination of the metric type according to Petrov; calculation of the Bianchi identities; operations with an electromagnetic field; tetradic rotations; coordinate transformations.

The purpose of this study was to identify nurses' managerial conduct in a private maternity hospital located in the interior of São Paulo state, Brazil. To collect data, the authors used a questionnaire with 20 propositions related to the work of nurses in the different units of the hospital. The authors then performed a descriptive statistical analysis of the data. Results showed a tendency toward democratization in the conduct of the nurses investigated, as the majority of the responses privileged questions on team work, workers' participation and group development. The authors also found that a great part of the responses did not agree with the propositions about the principles of Classical Administration Theory.

The set A of observables of a physical system with a finite or infinite number of degrees of freedom, subject to certain constraint conditions, is considered. Using the Jordan algebra structure on A associated with the symmetric Poisson bracket obtained by Droz-Vincent, a Jordan product is obtained on the quotient set A/I, where I is the ideal generated by the second-class constraints. It is shown that this product on A/I corresponds to the symmetric Dirac bracket. The formalism developed is applied to systems of harmonic oscillators, the nonrelativistic field and the Rarita-Schwinger field, and the possibility of its use in fermionic string theories is discussed. (M.C.K.)

Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. However, the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.
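The paper's high-performance C++ library is not reproduced here, but the core idea of block-sparse contraction, storing only nonzero dense blocks and contracting pairs with matching inner block indices, can be sketched in a few lines (a toy Python illustration under assumed conventions, not the reported implementation):

```python
import numpy as np

def block_sparse_matmul(A, B):
    """Contract two block-sparse matrices stored as {(bi, bj): dense block}.

    Only pairs of stored blocks sharing the inner block index contribute,
    so the work scales with the number of nonzero block pairs rather than
    with the full dense shape.
    """
    C = {}
    # Index B's blocks by their row (inner) block index for fast lookup.
    by_row = {}
    for (bk, bj), blk in B.items():
        by_row.setdefault(bk, []).append((bj, blk))
    for (bi, bk), a_blk in A.items():
        for bj, b_blk in by_row.get(bk, []):
            prod = a_blk @ b_blk
            acc = C.get((bi, bj))
            C[(bi, bj)] = prod if acc is None else acc + prod
    return C

# Two 2x2-block matrices (blocks of size 2x2) with missing (zero) blocks.
rng = np.random.default_rng(1)
A = {(0, 0): rng.standard_normal((2, 2)), (1, 1): rng.standard_normal((2, 2))}
B = {(0, 0): rng.standard_normal((2, 2)), (0, 1): rng.standard_normal((2, 2))}
C = block_sparse_matmul(A, B)
# A's block row 1 finds no matching B blocks, so only (0,0) and (0,1) are stored.
```

A symbolic library like the one described adds, on top of this, compile-time shapes, arbitrary tensor ranks and permutational symmetry; the sparsity bookkeeping is the same in spirit.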

Optimizing maintenance programs for nuclear power plants is a difficult task. Beyond the reliability of the systems at hand, one has to consider several conflicting objectives such as safety, availability, maintenance costs and personnel exposure to radiation, all under risk. Multi-Attribute Utility Theory is a widely used framework for coping with such problems. This procedure is, however, based on a set of axioms which imply an expected-utility treatment of risk. It has been shown elsewhere that the risk structure to be considered in such cases does not correspond to behavior consistent with such a treatment of risk, but would rather correspond to a rank-dependent evaluation type of model. The question raised is then how to use a multi-attribute scheme of preferences under such conditions. (author)

We introduce a versatile bottom-up derivation of a formal theoretical framework to describe (passive) soft-matter systems out of equilibrium subject to fluctuations. We provide a unique connection between the constituent-particle dynamics of real systems and the time evolution equation of their measurable (coarse-grained) quantities, such as local density and velocity. The starting point is the full Hamiltonian description of a system of colloidal particles immersed in a fluid of identical bath particles. Then, we average out the bath via Zwanzig’s projection-operator techniques and obtain the stochastic Langevin equations governing the colloidal-particle dynamics. Introducing the appropriate definition of the local number and momentum density fields yields a generalisation of the Dean-Kawasaki (DK) model, which resembles the stochastic Navier-Stokes description of a fluid. Nevertheless, the DK equation still contains all the microscopic information and, for that reason, does not represent the dynamical law of observable quantities. We address this controversial feature of the DK description by carrying out a nonequilibrium ensemble average. Adopting a natural decomposition into local-equilibrium and nonequilibrium contribution, where the former is related to a generalised version of the canonical distribution, we finally obtain the fluctuating-hydrodynamic equation governing the time-evolution of the mesoscopic density and momentum fields. Along the way, we outline the connection between the ad hoc energy functional introduced in previous DK derivations and the free-energy functional from classical density-functional theory. The resultant equation has the structure of a dynamical density-functional theory (DDFT) with an additional fluctuating force coming from the random interactions with the bath. We show that our fluctuating DDFT formalism corresponds to a particular version of the fluctuating Navier-Stokes equations, originally derived by Landau and Lifshitz

The inverse problem of general relativity theory (GRT) is solved: determining the metric and the potentials of an electromagnetic field from their values at a nonsingular point of the space V₄ and from prescribed functions, the generalized momenta of a test charged particle. The Hamilton-Jacobi equation for a test charged particle in GRT is used. The general form of the dependence of the generalized momenta on the initial values is determined. It is noted that the solution of the inverse problem of dynamics in GRT contains an arbitrariness, which depends on the choice of the values of the metric and of the electromagnetic potentials at the nonsingular point.

Evolutionary graph theory (EGT) was proposed by Lieberman et al. in 2005. EGT has been successful in explaining biological evolution and some social phenomena. Considering the time to fixation is extremely important for EGT in many practical problems, including evolutionary theory and the evolution of cooperation. This study characterizes the time to asymptotically reach fixation.
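For concreteness, the quantity being characterized can be estimated by simulation. The sketch below (illustrative only, not the paper's analytical treatment) runs the birth-death Moran process of EGT on a cycle graph, recording whether a single mutant of fitness r fixes and how many update steps absorption takes:

```python
import random

def moran_absorption_time(adj, r=1.5, start=0, rng=random.Random(42)):
    """One run of the birth-death Moran process on a graph.

    adj: adjacency list; a single mutant of fitness r starts at `start`.
    Returns (fixed, steps): whether the mutant lineage fixed, and the
    number of birth-death events until absorption (fixation or extinction).
    """
    n = len(adj)
    mutant = [False] * n
    mutant[start] = True
    count, steps = 1, 0
    while 0 < count < n:
        # Select a reproducing node with probability proportional to fitness.
        weights = [r if mutant[i] else 1.0 for i in range(n)]
        parent = rng.choices(range(n), weights=weights)[0]
        # The offspring replaces a uniformly chosen neighbour.
        child = rng.choice(adj[parent])
        count += mutant[parent] - mutant[child]
        mutant[child] = mutant[parent]
        steps += 1
    return count == n, steps

# Cycle graph on 10 nodes (an isothermal graph, so the fixation
# probability matches the well-mixed Moran value).
n = 10
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
runs = [moran_absorption_time(cycle) for _ in range(200)]
```

Averaging `steps` over many runs, conditioned on fixation, gives the mean fixation time that the paper characterizes asymptotically.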

The lower continental crust, formerly very poorly understood, has recently been investigated by various geological and geophysical techniques that are beginning to yield a generally agreed on though still vague model (Lowman, 1984). As typified by at least some exposed high-grade terranes, such as the Scottish Scourian complex, the lower crust in areas not affected by Phanerozoic orogeny or crustal extension appears to consist of gently dipping granulite gneisses of intermediate bulk composition, formed from partly or largely supracrustal precursors. This model, to the degree that it is correct, has important implications for early crustal genesis and the origin of continental crust in general. Most important, it implies that except for areas of major overthrusting (which may of course be considerable) normal superposition relations prevail, and that since even the oldest exposed rocks are underlain by tens of kilometers of sial, true primordial crust may still survive in the lower crustal levels (cf. Phinney, 1981).

This is the report of the talk given at the conference 'Number, Time and Relativity', held at the Bauman University, Moscow, August 2004, concerning the recent research activity of the author and his collaborators about the inter-relation of the concepts of division algebras, representations of Clifford algebras, generalized supersymmetries with the introduction of an alternative description of the M-algebra in terms of the non-associative structure of the octonions. (author)

On a hypersphere of prescribed radius the so-called genealogical basis has been constructed. Using this basis, the many-body Schroedinger equation is obtained for bound states of various physical systems. The genealogical-series method, in general outline an extension of the angular-potential-functions method, deals with potential harmonics of any generation needed. The new approach provides an exact numerical description of hadron systems with higher two-body interactions.

In this work we determine theoretically the critical charge density in the system of a grounded metallic sphere and a uniformly charged dielectric plane, in the presence of grounded surfaces, in a more general case. Special attention is paid to the influence of the system geometry in determining the optimal conditions for obtaining the minimum critical charge density. This situation is frequently encountered in industrial conditions and is important in evaluating the danger of electrostatic discharges. (author)

In the construction of physical theories several paradigms are in use (following Yu. S. Vladimirov). Depending on the number of entities employed, the paradigms are trialist (3 entities), dualist (2 entities) and monistic (1 entity). The trialist paradigm uses the following entities: geometry (G), particles (P) and fields (F). The transition to dualist paradigms is performed in two ways: two entities take over the functions of the third, or two entities are merged into a single synthesis. A limiting dualistic theory is also possible, in which the unified essence additionally assumes the functions of the third. In turn, according to how the entities are grouped, dualistic theories can be divided into geometric (unification of geometry and field), relational (unification of geometry and particles) and field (unification of fields and particles). To connect two theories one should reduce them to a common denominator: a trialist or a monistic theory. Since monistic theories are at the moment completely unknown, only trialist theories may be used. General relativity is a typical representative of the geometric dualistic paradigm; however, only gravity is geometrized, while the other fields are not. The relativistic theory of gravitation, in turn, is a typical trialist theory. To establish a correspondence between the theories, the material fields in the general theory of relativity should be geometrized. It is proposed to implement this on the basis of a multi-dimensional Kaluza-Klein theory. (author)

Generalized Schrödinger uncertainty relations are shown to have the meaning of fundamental restrictions on the characteristics of the space of states in any probability-like theory. Quantum mechanics, as well as the theory of Brownian motion over arbitrary spans of time, falls into this category. The coordinate-momentum uncertainty relations within the latter theory are compared with the analogous uncertainty relations for a microparticle in a Gaussian wave-packet state. It is found that, despite the profound difference in the mathematical tools of the two theories, they exhibit a conceptual resemblance. It manifests itself under complementary conditions: short times in one theory correspond to long times in the other and vice versa, while in either theory an uncontrollable effect of either quantum or thermal type is of crucial importance.

It is shown that (1) the proper framework for testing Rastall's theory and its generalisations is the case of non-negligible (i.e. discernible) gravitational effects such as gravity gradients; (2) these theories have conserved integral four-momentum and angular momentum; and (3) the Nordtvedt effect then provides limits on the parameters which arise as a result of the non-zero divergence of the energy-momentum tensor. (author)

We discuss the dynamics of a susceptible-infected-susceptible (SIS) model with local awareness in networks. Individual awareness of the infectious disease is characterized by a general function of the epidemic information in a node's neighborhood. We build a high-accuracy approximate equation governing the spreading dynamics and derive an approximate epidemic threshold above which the epidemic spreads over the whole network. Our results extend previous work and show that the epidemic threshold depends on the awareness function evaluated at one infectious neighbor. Interestingly, when a power-law awareness function is chosen, the epidemic threshold can emerge in infinite networks.
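A minimal simulation of such a model can be sketched as follows (illustrative assumptions only, not the paper's equations: discrete time, an Erdos-Renyi-like graph, and the power-law awareness function f(k) = (1 + k)^(-alpha) damping the per-neighbour transmission probability):

```python
import random

def sis_with_awareness(adj, beta=0.3, gamma=0.2, alpha=1.0,
                       steps=200, seed=3):
    """Discrete-time SIS where awareness of k infected neighbours
    reduces the per-neighbour infection probability by (1 + k)**(-alpha).
    Returns the final fraction of infected nodes."""
    rng = random.Random(seed)
    n = len(adj)
    infected = [rng.random() < 0.1 for _ in range(n)]
    for _ in range(steps):
        nxt = infected[:]
        for i in range(n):
            k = sum(infected[j] for j in adj[i])
            if infected[i]:
                if rng.random() < gamma:      # recovery
                    nxt[i] = False
            elif k > 0:                       # awareness-damped infection
                q = beta * (1 + k) ** (-alpha)
                if rng.random() < 1 - (1 - q) ** k:
                    nxt[i] = True
        infected = nxt
    return sum(infected) / n

# Erdos-Renyi-like random graph, mean degree about 9.
rng = random.Random(0)
n, p = 300, 0.03
adj = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            adj[i].append(j)
            adj[j].append(i)

prev_aware = sis_with_awareness(adj, alpha=1.0)   # with awareness
prev_blind = sis_with_awareness(adj, alpha=0.0)   # awareness switched off
```

With alpha = 0 the awareness factor is identically 1 and the model reduces to plain SIS; the awareness term lowers the endemic prevalence, consistent with the threshold shift discussed above.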

Schiff (1960) proposed a new test of general relativity based on measuring the precessions of the spin axes of gyroscopes in earth orbit. Since 1963 a Stanford research team has been developing an experiment to measure the two effects calculated by Schiff. The gyroscope consists of a uniform sphere of fused quartz 38 mm in diameter, coated with superconductor, electrically suspended and spinning at about 170 Hz in vacuum. The paper describes the proposed flight apparatus and the current state of development of the gyroscope, including techniques for manufacturing and measuring the gyro rotor and housing, generating ultralow magnetic fields, and mechanizing the readout.

The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
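The Van der Pol test bed itself is easy to reproduce numerically. The following self-contained sketch (standard RK4 integration; all parameter values are arbitrary choices for illustration, not taken from the paper) shows a small perturbation being attracted onto the limit cycle, whose amplitude is close to 2 for moderate mu:

```python
def van_der_pol_step(x, v, mu, dt):
    """One RK4 step of the Van der Pol oscillator x'' - mu (1 - x^2) x' + x = 0."""
    def f(x, v):
        return v, mu * (1 - x * x) * v - x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def limit_cycle_amplitude(mu=0.5, dt=0.01, t_total=200.0):
    """Integrate from a small perturbation; the trajectory is attracted
    onto the limit cycle. Returns max |x| after discarding the transient."""
    x, v = 0.01, 0.0            # start far inside the cycle
    n = int(t_total / dt)
    amp = 0.0
    for i in range(n):
        x, v = van_der_pol_step(x, v, mu, dt)
        if i > n // 2:          # discard the transient half
            amp = max(amp, abs(x))
    return amp

amp = limit_cycle_amplitude()   # close to 2 for mu = 0.5
```

The time-dependent classical solution found here is exactly the situation where standard linearization struggles and where the adapted scheme, built on broken time-translation symmetry, is needed.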

We study a higher derivative (HD) field theory with an arbitrary order of derivative for a real scalar field. The degree of freedom for the HD field can be converted to multiple fields with canonical kinetic terms up to the overall sign. The Lagrangian describing the dynamics of the multiple fields is known as the Lee-Wick (LW) form. The first step to obtain the LW form for a given HD Lagrangian is to find an auxiliary field (AF) Lagrangian which is equivalent to the original HD Lagrangian up to the quantum level. Until now, the AF Lagrangian has been studied only for N=2 and 3 cases, where N is the number of poles of the two-point function of the HD scalar field. We construct the AF Lagrangian for arbitrary N. By the linear combinations of AF fields, we also obtain the corresponding LW form. We find the explicit mapping matrices among the HD fields, the AF fields, and the LW fields. As an exercise of our construction, we calculate the relations among parameters and mapping matrices for N=2, 3, and 4 cases.

A new approach to computing the complex spectrum of magnetohydrodynamic waves and instabilities of moving plasmas is presented. It is based on the concept of the Spectral Web, exploiting the self-adjointness of the generalized Frieman-Rotenberg force operator, G, and the Doppler-Coriolis gradient operator parallel to the velocity, U. The problem is solved with an open boundary, where the complementary energy Wcom represents the amount of energy to be delivered to or extracted from the system to maintain a harmonic time-dependence. The eigenvalues are connected by a system of curves in the complex ω-plane, the solution path and the conjugate path (where Wcom is real or imaginary, respectively), which together constitute the Spectral Web, a structure with a characteristic geometry that has yet to be clarified but that has deep physical significance. It is obtained by straightforward contour plotting of the two paths. The complex eigenvalues, within a specified rectangle of the complex ω-plane, are found by fast, reliable, and accurate iterations. Real and complex oscillation theorems, replacing the familiar tool of counting nodes of eigenfunctions, provide an associated mechanism of mode tracking along the two paths. The Spectral Web method is generalized to toroidal systems and extended to include a resistive wall by accounting for the dissipation in such a wall. It is applied in an accompanying Paper II [J. P. Goedbloed, Phys. Plasmas 25, 032110 (2018)] to a multitude of the basic fundamental instabilities operating in cylindrical plasmas.

Purpose: to reveal modern ideas about the essence of the concept of "sport" and to determine its role in the development of the general theory of physical culture and sports theory. Material & Methods: analysis of the specialized literature covering various aspects of the development of the field of human activity related to the use of physical exercises. Results: in today's society there is an objective sphere of human activity related to the use of physical exercises, for which the term "physical culture" is most often used in domestic and foreign scientific and social practice. Conclusion: the constitutive conditions of the process of developing a general theory of physical culture are singled out; it is shown that sport, as a special socio-cultural phenomenon, is a historically conditioned activity of people associated with the use of physical exercises, aimed at preparation for and participation in competitions, as well as the individually and socially significant results of such activity.

Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agrees with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter the theoretical Hubble's relation obtained from the modified theory seems not in contradiction to observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703 we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and give the equations for homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification to quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants including Planck's h-bar as well as Boltzmann's k B by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant to cosmologically long time.

Traditional attachment theory posits that attachment in infancy and early childhood is the result of intergenerational transmission of attachment from parents to offspring. Verhage et al. (2016) present meta-analytic evidence addressing the intergenerational transmission of attachment between caregivers and young children. In this commentary, we argue that their appraisal of the behavioral genetics literature is incomplete. The suggested research focus on shared environmental effects may dissuade the pursuit of profitable avenues of research and may hinder progress in attachment theory. Specifically, further research on the "transmission gap" will continue to limit our understanding of attachment etiology. We discuss recent theoretical developments from an evolutionary psychological perspective that can provide a valuable framework to account for the existing behavioral genetic data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

An outstanding issue in the computational analysis of time dependent problems is the imposition of appropriate radiation boundary conditions at artificial boundaries. Accurate conditions are developed which are based on the asymptotic analysis of wave propagation over long ranges. Employing the method of steepest descents, dominant wave groups are identified and simple approximations to the dispersion relation are considered in order to derive local boundary operators. The existence of a small number of dominant wave groups may be expected for systems with dissipation. Estimates of the error as a function of domain size are derived under general hypotheses, leading to convergence results. Some practical aspects of the numerical construction of the asymptotic boundary operators are also discussed.

According to the big bang theory, the universe began about 15 billion years ago and has been expanding ever since. If certain elementary physical concepts are naively applied to this cosmological theory, a paradox arises in which distant astronomical objects seem to have lain at distances from the Earth larger than the possible size of the universe. The paradox is resolved by using concepts from general relativity theory, concepts that may appear startling to some readers.

Neural networks and fuzzy systems are two soft-computing paradigms for system modelling. Adapting a neural or fuzzy system requires solving two optimization problems: structural optimization and parametric optimization. Structural optimization is a discrete optimization problem which is very hard to solve using conventional optimization techniques. Parametric optimization can be solved using conventional optimization techniques, but the solution may easily be trapped at a bad local optimum. Evolutionary computation is a general-purpose stochastic global optimization approach under the universally accepted neo-Darwinian paradigm, which is a combination of the classical Darwinian evolutionary theory, the selectionism of Weismann, and the genetics of Mendel. Evolutionary algorithms (EAs) are a major approach to adaptation and optimization. In this paper, we first introduce evolutionary algorithms with emphasis on genetic algorithms and evolution strategies. Other evolutionary algorithms such as genetic programming, evolutionary programming, particle swarm optimization, immune algorithms, and ant colony optimization are also described. Some topics pertaining to evolutionary algorithms are also discussed, and a comparison between evolutionary algorithms and simulated annealing is made. Finally, the application of EAs to the learning of neural networks as well as to the structural and parametric adaptation of fuzzy systems is detailed.
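The neo-Darwinian loop this abstract builds on (selection, inheritance with crossover, random mutation) can be sketched as a minimal genetic algorithm. This is an illustrative sketch, not the paper's algorithm; the OneMax fitness function and all parameter values are assumptions chosen for clarity.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_mut=0.02, seed=0):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    # Random initial population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Binary tournament: the fitter of two random individuals survives.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit independently with probability p_mut.
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            children.append(child)
        pop = children
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best

# Toy objective (OneMax): maximize the number of 1-bits.
best = genetic_algorithm(fitness=sum)
print(sum(best))
```

The same skeleton covers the structural/parametric split the abstract mentions: a discrete encoding (as here) handles structure, while a real-valued encoding with Gaussian mutation gives an evolution strategy for parameters.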

Reasons for the non-uniqueness of predictions of the general relativity theory (GRT) for gravitational effects are analyzed in detail. In the authors' opinion, the absence of a comparison mechanism for curved and plane metrics is the reason for the non-uniqueness.

We consider a new algebraic approach in the geometry of supergauge theories and supergravity. An introduction of nilpotent algebras simplifies significantly the analysis of D = 3, 4, N = 1 supergravity constraints. Different terms in the invariant action functionals of SG- and SYM-theories are constructed as the integrals of corresponding generalized differential forms. (orig.)

A general asymptotic theory of estimates from estimating functions based on jack-knife pseudo-observations is established by requiring that the underlying estimator can be expressed as a smooth functional of the empirical distribution. Using results in p-variation norms, the theory is applied...

Suggests using general system theory as a unifying theoretical framework for science and technology education for all. Five reasons are articulated: the multidisciplinary nature of systems theory, the ability to engage complexity, the capacity to describe system dynamics, the ability to represent the relationship between microlevel and…

Three principles of general systems theory are presented and systems theory is distinguished from systems analysis. The principles state that all systems tend to become more disorderly, that they must be diverse in order to be stable, and that only those maximizing their resource utilization for doing useful work will survive. (Author/LBH)

In this chapter, the author describes how a pedagogical approach utilizing insights and principles from queer theory facilitated an intersectional analysis in a large lecture, general education course on "Gender, Sexuality, Literature and Culture" at the University of Massachusetts Amherst. Her goal in using queer theory's deconstructive…

In 2005, Agnew introduced a new integrated theory, which he labels a general theory of crime and delinquency. He proposes that delinquency is more likely to occur when constraints against delinquency are low and motivations for delinquency are high. In addition, he argues that constraints and motivations are influenced by variables in five life…

In this paper we derive 4-dimensional General Relativity from three dimensions, using the intrinsic spatial geometry inherent in Yang-Mills theory which has been exposed by previous authors, as well as some properties of the Ashtekar variables. We provide various interesting relations, including the fact that General Relativity can be written as a Yang-Mills theory in which the anti-self-dual Weyl curvature replaces the Yang-Mills coupling constant. We have generalized the results of some previous authors, covering Einstein's spaces, to include more general spacetime geometries.

On the 20th June 1933 Professor Einstein addressed a large and enthusiastic audience in the Victorian Gothic Bute Hall of the University of Glasgow. Einstein spoke 'About the Origins of the General Theory of Relativity'. In 1905 Einstein had changed the face of physics forever with the publication of his radical new ideas on special relativity. His general theory of relativity was introduced to the world in 1915. However, in 1933 Einstein faced another challenge: survival in a world of change....

OAK B202 THE GENERAL ATOMICS FUSION THEORY PROGRAM ANNUAL REPORT FOR FISCAL YEAR 2002. The dual objective of the fusion theory program at General Atomics (GA) is to significantly advance the scientific understanding of the physics of fusion plasmas and to support the DIII-D and other tokamak experiments. The program plan is aimed at contributing significantly to the Fusion Energy Science and the Tokamak Concept Improvement goals of the Office of Fusion Energy Sciences (OFES).

A method of obtaining approximate solutions of the transport equation is presented in a form applicable in principle to any geometry. The approximation will give good results in cases where the angular distribution is not very anisotropic. The basis of the approximation is to expand the density per unit solid angle Ψ(r, Ω) in spherical harmonic tensors formed from Ω, the unit vector in the direction of velocity, and to break off the expansion. A differential equation whose degree increases with the order of the approximation is obtained for the total density Ψ^(0)(r). In this equation the numbers ν_i depend on the order of the approximation and on the value of the parameter a of the medium, but not at all on the geometry. When the equation for the total density is an ordinary differential equation, we simulate the physical condition of continuity of Ψ(r, Ω) at a boundary in a multi-medium problem by requiring that the retained spherical harmonic moments of Ψ(r, Ω) be continuous; this determines the constants in the solution for Ψ^(0)(r). The form of the solution for the total density and the necessary moments in an approximation of general order is given explicitly for plane and spherical symmetry; for cylindrical symmetry the solution is given for two low-order approximations. In a later report (CRT-338, Revised) the application of the method to several problems involving plane and spherical symmetry will be discussed in detail, and the results of a number of examples already worked will also be given. (author)

The general rotorcraft aeromechanical stability program (GRASP) was developed to calculate aeroelastic stability for rotorcraft in hovering flight, vertical flight, and ground contact conditions. GRASP is described in terms of its capabilities and its philosophy of modeling. The equations of motion that govern the physical system are described, as well as the analytical approximations used to derive them. The equations include the kinematical equation, the element equations, and the constraint equations. In addition, the solution procedures used by GRASP are described. GRASP is capable of treating the nonlinear static and linearized dynamic behavior of structures represented by arbitrary collections of rigid-body and beam elements. These elements may be connected in an arbitrary fashion, and are permitted to have large relative motions. The main limitation of this analysis is that periodic coefficient effects are not treated, restricting rotorcraft flight conditions to hover, axial flight, and ground contact. Instead of following the methods employed in other rotorcraft programs, GRASP is designed to be a hybrid of the finite-element method and the multibody methods used in spacecraft analysis. GRASP differs from traditional finite-element programs by allowing multiple levels of substructure in which the substructures can move and/or rotate relative to others with no small-angle approximations. This capability facilitates the modeling of rotorcraft structures, including the rotating/nonrotating interface and the details of the blade/root kinematics for various rotor types. GRASP differs from traditional multibody programs by considering aeroelastic effects, including inflow dynamics (simple unsteady aerodynamics) and nonlinear aerodynamic coefficients.

Evolutionary game theory has been viewed as an evolutionary repair of rational actor game theory in the hope that a population of boundedly rational players may attain convergence to classic rational solutions, such as the Nash Equilibrium, via some learning or evolutionary process. In this thesis

For a class of first order gauge theories it was shown that the proper solution of the BV-master equation can be obtained straightforwardly. Here we present the general condition which the gauge generators should satisfy to conclude that this construction is relevant. The general procedure is illustrated by its application to the Chern-Simons theory in any odd-dimension. Moreover, it is shown that this formalism is also applicable to BRST field theories, when one replaces the role of the exterior derivative with the BRST charge of first quantization. (author). 17 refs

In this paper, we have investigated an anisotropic homogeneous plane-symmetric cosmological micro-model in the presence of a massless scalar field in a modified theory of Einstein's general relativity. Some interesting physical and geometrical aspects of the model, together with the singularity in the model, are discussed. Further, it is shown that this theory is valid and leads to Einstein's theory as the coupling parameter λ → 0 at the micro (i.e. quantum) level in general.

The identification of a suitable gravitational energy in theories of gravity has a long history, and it is well known that a unique answer cannot be given. In the first part of this paper we present a streamlined version of the derivation of Freud's superpotential in general relativity. It is obtained by integrating the gravitational field equation by parts once. This allows us to extend these results directly to the Einstein-Cartan theory. Interestingly, Freud's original expression, first stated in 1939, remains valid even when considering gravitational theories in Riemann-Cartan or, more generally, in metric-affine spacetimes.

The analogy between electrodynamics and the translational gauge theory of gravity is employed in this paper to develop an ansatz for a nonlocal generalization of Einstein's theory of gravitation. Working in the linear approximation, we show that the resulting nonlocal theory is equivalent to general relativity with 'dark matter'. The nature of the predicted dark matter, which is the manifestation of the nonlocal character of gravity in our model, is briefly discussed. It is demonstrated that this approach can provide a basis for the Tohline-Kuhn treatment of the astrophysical evidence for dark matter.

The theory of the gauge gravitational field with de Sitter group localization is formulated. Proceeding from the de Sitter Universe tetrad components, the relationship between the Riemann metric and the de Sitter gauge field is established. It is shown that general relativity theory (GRT) with a cosmological term is the simplest variant of the de Sitter gauge gravitation theory, passing in the limit of infinite curvature radius of the de Sitter Universe into the Poincaré-invariant GRT without a cosmological term. Similarly, the theory of the gauge gravitational field under localization of the dynamical group of the Einstein homogeneous static Universe (the Einstein group R×SO(4)) is formulated.

The study of evolutionary dynamics has so far been mainly restricted to finite strategy spaces. In this paper we show that this unsatisfying restriction is unnecessary. We specify a simple condition under which the continuous-time replicator dynamics are well defined for the case of infinite strategy spaces. Furthermore, we provide new conditions for the stability of rest points and show that even strict equilibria may be unstable. Finally, we apply this general theory to a number of applications ...
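For the finite-strategy case this abstract generalizes, the continuous-time replicator dynamics ẋ_i = x_i (f_i(x) − f̄(x)) can be integrated numerically in a few lines. The Hawk-Dove payoff matrix below (value V = 2, cost C = 4) and the step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i * (f_i - f_bar)."""
    f = A @ x        # payoff of each pure strategy against the population mix
    f_bar = x @ f    # mean population payoff
    return x + dt * x * (f - f_bar)

# Hawk-Dove payoffs (illustrative): V = 2, C = 4, rows/cols = (Hawk, Dove).
A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])

x = np.array([0.9, 0.1])          # start with a mostly-Hawk population
for _ in range(5000):
    x = replicator_step(x, A)
print(x)   # approaches the mixed equilibrium (V/C, 1 - V/C) = (0.5, 0.5)
```

Note that the Euler step preserves the simplex exactly, since the components of x * (f - f_bar) sum to zero whenever the entries of x sum to one; the stability questions the paper studies concern precisely which rest points such trajectories approach.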

An evolutionary perspective on attachment theory and psychoanalytic theory brings these two fields together in interesting ways. Application of the evolutionary principle of parent-offspring conflict to attachment theory suggests that attachment styles represent context-sensitive, evolved (adaptive) behaviors. In addition, an emphasis on offspring counter-strategies to adult reproductive strategies leads to consideration of attachment styles as overt manifestations of psychodynamic mediating processes, including the defense mechanisms of repression and reaction formation.

Generalising Newton's law of gravitation, general relativity is one of the pillars of modern physics. On the occasion of general relativity's centennial, leading scientists in the different branches of gravitational research review the history and recent advances in the main fields of applications of the theory, which was referred to by Lev Landau as “the most beautiful of the existing physical theories”.

We present a general and systematic theory of non-equilibrium dynamics of multi-component fluid membranes, in general, and membranes containing transmembrane proteins, in particular. Developed based on a minimal number of principles of statistical physics and designed to be a meso...

The objectives of this study were to test the predictive power of self-control theory for delinquency in a Chinese context, and to explore if social factors as predicted in social bonding theory, differential association theory, general strain theory, and labeling theory have effects on delinquency in the presence of self-control. Self-report data…

In a recent paper [1], a new class of gravitational theories with two local degrees of freedom was introduced. The existence of these theories apparently challenges the distinctive role of general relativity as the unique non-linear theory of massless spin-2 particles. Here we perform a comprehensive analysis of these theories with the aim of (i) understanding whether or not they are actually equivalent to general relativity, and (ii) finding the root of the variance in case they are not. We have found that a broad set of seemingly different theories actually pass all the possible tests of equivalence to general relativity (in vacuum) that we were able to devise, including the analysis of scattering amplitudes using on-shell techniques. These results are complemented with the observation that the only examples which are manifestly not equivalent to general relativity either do not contain gravitons in their spectrum, or are not guaranteed to include only two local degrees of freedom once radiative corrections are taken into account. Coupling to matter is also considered: we show that coupling these theories to matter in a consistent way is not as straightforward as one could expect. Minimal coupling, as well as the most straightforward non-minimal couplings, cannot be used. Therefore, before being able to address any issues in the presence of matter, it would be necessary to find a consistent (and in any case rather peculiar) coupling scheme.