The World Wide Web has become a major delivery platform for complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behaviour and place unique demands on their usability, performance, security, and ability to grow and evolve. However, the vast majority of these applications are still developed in an ad hoc manner, contributing to problems of usability, maintainability, quality, and reliability.[1][2] While Web development can benefit from established practices in related disciplines, it has distinguishing characteristics that demand special consideration, and recent years have seen developments towards addressing those considerations.

Web engineering focuses on the methodologies, techniques, and tools that are the foundation of Web application development and that support the design, development, evolution, and evaluation of Web applications. Web application development has certain characteristics that make it different from traditional software, information system, or computer application development.

Proponents of Web engineering supported the establishment of Web engineering as a discipline at an early stage of the Web. Major arguments for treating Web engineering as a new discipline are:

The development process for Web-based information systems (WIS) is different and unique.[3]

Web engineering is multi-disciplinary; no single discipline (such as software engineering) can provide a complete theoretical basis, body of knowledge, and set of practices to guide WIS development.[4]

Web-based systems raise distinct issues of evolution and lifecycle management compared with more 'traditional' applications.

Web-based information systems and applications are pervasive and non-trivial. The prospects of the Web as a platform will continue to grow, and it is worth treating it specifically.

However, recognition of Web engineering as a new field has been controversial, especially among people in other established disciplines such as software engineering; the issue is how different and independent Web engineering really is compared with those disciplines.

Main topics of Web engineering include, but are not limited to, the following areas:

1. Science – Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. The formal sciences are often excluded because they do not depend on empirical observation; disciplines that apply science, such as engineering and medicine, may also be considered applied sciences. During the Islamic Golden Age, foundations for the scientific method were laid by Ibn al-Haytham in his Book of Optics. In the 17th and 18th centuries, scientists increasingly sought to formulate knowledge in terms of physical laws, and over the course of the 19th century the word "science" became increasingly associated with the scientific method itself as a disciplined way to study the natural world; it was during this time that scientific disciplines such as biology and chemistry took shape. Science in a broad sense existed before the modern era and in many historical civilizations, but modern science is distinct in its approach and successful in its results. "Science" in its original sense was a word for a type of knowledge rather than a specialized word for the pursuit of such knowledge; in particular, it was the type of knowledge that people can communicate to each other. Knowledge about the workings of natural things was gathered long before recorded history and led to the development of complex abstract thought, as shown by the construction of calendars and techniques for making poisonous plants edible. The early Greek speculators and theorists, particularly those interested in astronomy, have for this reason been claimed as the first philosophers in the strict sense. In contrast, trying to use knowledge of nature to imitate nature was seen as a more appropriate interest for lower-class artisans.
A clear-cut distinction between formal and empirical science was made by the pre-Socratic philosopher Parmenides; although his work Peri Physeos is a poem, it may be viewed as an epistemological essay on method in natural science. Parmenides' ἐὸν may refer to a formal system or calculus that can describe nature more precisely than natural languages, and "physis" may be identical to ἐὸν. He criticized the older style of studying physics as too purely speculative and lacking in self-criticism, and was particularly concerned that some of the early physicists treated nature as if it could be assumed to have no intelligent order, explaining things merely in terms of motion and matter; the study of such things had previously been the realm of mythology and tradition. Aristotle later created a less controversial, systematic programme of Socratic philosophy that was teleological, and he rejected many of the conclusions of earlier scientists. In his physics, for example, the sun goes around the earth, and each thing has a formal cause, a final cause, and a role in a rational cosmic order; motion and change are described as the actualization of potentials already in things. While the Socratics insisted that philosophy should be used to consider the practical question of the best way to live for a human being, they did not argue for any other types of applied science.

2. Formal logic – Logic, from a root originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of arguments. A valid argument is one in which there is a relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics; more recently it has also been studied in computer science, linguistics, and psychology. The concept of form is central to logic: the validity of an argument is determined by its logical form. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural-language arguments, and the study of fallacies is an important branch of it; much informal argument is not, strictly speaking, deductive. On some conceptions of logic, formal logic is the study of inference with purely formal content, where an inference possesses purely formal content if it can be expressed as an application of a wholly abstract rule. The works of Aristotle contain the earliest known study of logic, and modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same; this does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference, and it is divided into two main branches: propositional logic and predicate logic. Mathematical logic extends symbolic logic into other areas, in particular the study of model theory, proof theory, and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type; the form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language, to make its content usable in formal inference.
Simply put, formalising means translating English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of forms. Certain parts of each sentence must be replaced with schematic letters: thus, for example, the expression "all Ps are Qs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on. The schema can further be condensed into the formula A, where the letter A indicates the judgement "all – are –". The importance of form was recognised from ancient times.
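The schema "all Ps are Qs" can be sketched in code by modelling it as set inclusion (P ⊆ Q), so that one abstract rule covers every sentence of that form. This is only an illustrative sketch in Python; the toy domain and names are invented for the example, not taken from the text.

```python
def all_are(P, Q):
    """Truth of the schematic judgement 'all Ps are Qs', modelled as P being a subset of Q."""
    return P <= Q  # set inclusion

# A toy domain for the classic syllogism: all men are mortals,
# all Greeks are men, therefore all Greeks are mortals.
men     = {"socrates", "plato"}
mortals = {"socrates", "plato", "fido"}
greeks  = {"socrates", "plato"}

premise1   = all_are(men, mortals)     # all men are mortals
premise2   = all_are(greeks, men)      # all Greeks are men
conclusion = all_are(greeks, mortals)  # all Greeks are mortals

print(premise1 and premise2, conclusion)  # True True
```

Because validity depends only on the schematic form, the same `all_are` check works unchanged for "all cats are carnivores" or any other instantiation of the schema.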

3. Mathematics – Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope and definition. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity for as far back as written records exist, and the research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences", and Benjamin Peirce called it "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance, and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting, and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both; mathematical discoveries continue to be made today, and the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω; in Greece, the word for mathematics came to have the narrower and more technical meaning of "mathematical study" even in Classical times.

4. Mathematical statistics – Mathematical statistics is the application of mathematics to statistics; techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. Statistical science is concerned with the planning of studies, especially with the design of randomized experiments, and the initial analysis of the data from properly randomized studies often follows the study protocol. Of course, the data from a study can also be analyzed to consider secondary hypotheses or to suggest new ideas; such secondary analysis uses tools from data analysis. Data analysis is divided into descriptive statistics, the part of statistics that describes data, i.e. summarises the data and their typical properties, and inferential statistics, which is used to test hypotheses and make estimations using sample data: whereas descriptive statistics describe a sample, inferential statistics infer predictions about the population that the sample represents. Mathematical statistics has been inspired by, and has extended, many options in applied statistics. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution can be either univariate or multivariate; important and commonly encountered univariate distributions include the binomial, hypergeometric, and normal distributions, while the multivariate normal distribution is a commonly encountered multivariate distribution. The outcome of statistical inference may be an answer to the question "what should be done next?", where this might be a decision about making further experiments or surveys, or about drawing a conclusion before implementing some organizational or governmental policy.
For the most part, statistical inference makes propositions about populations; more generally, data about a random process is obtained from its observed behavior during a finite period of time. In statistics, regression analysis is a process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables when the focus is on the relationship between a dependent variable and one or more independent variables. Less commonly, the focus is on a quantile or other parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the target is a function of the independent variables called the regression function, and it is also of interest to characterize the variation of the dependent variable around that function, which can be described by a probability distribution. Many techniques for carrying out regression analysis have been developed; nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional. Nonparametric statistics more generally are not based on parameterized families of probability distributions (whose typical parameters are the mean, variance, etc.) and include both descriptive and inferential statistics.
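The simplest instance of estimating a regression function is ordinary least squares with one independent variable, fitting y ≈ a + b·x. The following is a minimal Python sketch; the data points are invented for illustration.

```python
def ols(xs, ys):
    """Return intercept a and slope b minimising the sum of squared residuals."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates: b = cov(x, y) / var(x), a = ȳ - b·x̄
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Noise-free data lying exactly on y = 1 + 2x is recovered exactly:
a, b = ols([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 1.0 2.0
```

With noisy data the same formulas return the line that minimises the squared deviations, and the scatter of the residuals around that line is what the probability distribution mentioned above describes.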

5. Theoretical computer science – Theoretical computer science is the division of computer science that focuses on the more mathematical and abstract aspects of computing, and it is not easy to circumscribe the theoretical areas precisely. Work in this field is often distinguished by its emphasis on mathematical technique, and despite this broad scope, the "theory people" in computer science self-identify as different from the "applied people". Some characterize themselves as doing the science underlying the field of computing, while others suggest that it is impossible to separate theory and application. The theory people regularly use experimental results from less theoretical areas such as software system research, and there is more cooperation than mutually exclusive competition between theory and application. Developments in mathematical logic led to the study of logic and computability. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication, and in the same decade Donald Hebb introduced a mathematical model of learning in the brain; with mounting biological data supporting this hypothesis with some modification, the field of neural networks was established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete, a landmark result in computational complexity theory. With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. Modern theoretical computer science research builds on these basic developments but includes many other mathematical and interdisciplinary problems that have been posed. An algorithm is a step-by-step procedure for calculations; algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function.
The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of structures are suited to different kinds of applications: for example, databases commonly use B-tree indexes for data retrieval, while compilers typically use hash tables to look up identifiers. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services, and efficient data structures are usually key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design; storing and retrieving can be carried out on data stored in both main memory and secondary memory. In computational complexity theory, a problem is regarded as inherently difficult if its solution requires significant resources; the theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the amount of resources, such as time and storage, needed to solve them.
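The definition of an algorithm above, a finite list of well-defined instructions whose execution proceeds through a sequence of states, and the point that organizing data well (here, keeping it sorted) makes an algorithm efficient, can both be seen in a classic example. This is a generic illustration in Python, not something specific to the text.

```python
def binary_search(sorted_items, target):
    """Finite, well-defined instructions computing the index of target in a
    sorted list, or -1 if absent. Each loop iteration is one state transition,
    fully determined by (lo, hi), so the procedure is deterministic and halts."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
print(binary_search([2, 3, 5, 7, 11, 13], 4))   # -1
```

Because the sorted-array data structure lets each step discard half of the remaining items, the search uses O(log n) comparisons rather than the O(n) of a linear scan, illustrating how the choice of data structure drives the resource bounds studied in complexity theory.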

6. Game theory – Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. It is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants; today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in humans and animals. Modern game theory began with John von Neumann's proof of the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players; the second edition of this book provided an axiomatic theory of expected utility. The theory was developed extensively in the 1950s by many scholars, and game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014, and John Maynard Smith being awarded the Crafoord Prize for his application of game theory to biology. Discussions of examples of two-person games occurred long before the rise of the modern theory: the first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle of James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined; this paved the way for more general theorems. The Danish mathematician Zeuthen proved that a mathematical model has a winning strategy by using Brouwer's fixed-point theorem, and in his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a unique field until John von Neumann published his paper in 1928, followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
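For a 2×2 zero-sum game with no saddle point, the mixed-strategy equilibrium whose existence von Neumann proved has a well-known closed form, sketched below in Python. The payoff matrix [[a, b], [c, d]] gives the row player's payoffs; the formulas assume no pure-strategy saddle point (so the denominator is nonzero). This is a standard textbook construction, not something stated in the text above.

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Mixed-strategy solution of a saddle-point-free 2x2 zero-sum game.

    Returns (p, value): p is the row player's equilibrium probability of
    playing the first row, and value is the game's value to the row player.
    """
    denom = a - b - c + d
    p = (d - c) / denom
    value = (a * d - b * c) / denom
    return p, value

# Matching pennies: payoffs [[1, -1], [-1, 1]]. Each player should
# randomise 50/50, and the value of the game is 0.
p, v = solve_2x2_zero_sum(1, -1, -1, 1)
print(p, v)  # 0.5 0.0
```

At these probabilities neither player can improve by deviating, which is exactly the equilibrium property the minimax theorem guarantees for zero-sum games.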

7. Decision theory – Decision theory is the study of the reasoning underlying an agent's choices. It is a topic studied by economists, statisticians, psychologists, and political and social scientists. Empirical applications of the theory are usually made with the help of statistical and econometric methods, especially via so-called choice models; estimation of such models is usually done via parametric, semi-parametric, and non-parametric maximum likelihood methods. Normative or prescriptive decision theory is concerned with identifying the best decisions; its practical application is called decision analysis, and is aimed at finding tools, methodologies, and software to help people make better decisions. In contrast, positive or descriptive decision theory is concerned with describing observed behaviors under the assumption that agents are behaving under some consistent rules. The prescriptions or predictions about behaviour that positive decision theory produces allow for further tests of the kind of decision-making that occurs in practice, and there is a thriving dialogue with experimental economics, which uses laboratory and field experiments to evaluate and inform theory. The area of choice under uncertainty represents the heart of decision theory. Daniel Bernoulli gives an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St Petersburg in winter; in his solution, he defines a utility function and computes expected utility rather than expected financial value. The phrase "decision theory" itself was used in 1950 by E. L. Lehmann. At this time, von Neumann and Morgenstern's theory of expected utility proved that expected-utility maximization followed from basic postulates about rational behavior, while the work of Maurice Allais and Daniel Ellsberg showed that human behavior has systematic and sometimes important departures from expected-utility maximization.
The prospect theory of Daniel Kahneman and Amos Tversky renewed the empirical study of economic behavior with less emphasis on rationality presuppositions. Pascal's Wager is a classic example of a choice under uncertainty. Intertemporal choice is concerned with choices in which different actions lead to outcomes realised at different points in time; what the optimal thing to do is depends partly on factors such as the rates of interest and inflation and the person's life expectancy. Some decisions are difficult because of the need to take into account how other people in the situation will respond to the decision that is taken; the analysis of such decisions is more often treated under the label of game theory rather than decision theory, and from the standpoint of game theory most of the problems treated in decision theory are one-player games. Other areas of decision theory are concerned with decisions that are difficult simply because of their complexity; one example is the model of economic growth and resource usage developed by the Club of Rome to help politicians make real-life decisions in complex situations. Decisions are also affected by whether options are framed together or separately. One example of a common and incorrect thought process is the gambler's fallacy: believing that a random event is affected by previous random events.
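Bernoulli's insurance example can be sketched numerically. All of the figures below (wealth, cargo value, premium, loss probability) are hypothetical, chosen only to illustrate the point; the logarithmic utility function is the one Bernoulli himself proposed. The sketch shows how maximising expected utility can favour insuring even when maximising expected financial value would not.

```python
import math

def expected_utility(outcomes):
    """outcomes: list of (probability, final wealth); utility u(w) = ln(w)."""
    return sum(p * math.log(w) for p, w in outcomes)

# Hypothetical numbers for the Dutch merchant's decision:
wealth, cargo, premium, p_loss = 3000.0, 8000.0, 1000.0, 0.1

# Uninsured: keep cargo with probability 0.9, lose it with probability 0.1.
uninsured = expected_utility([(1 - p_loss, wealth + cargo), (p_loss, wealth)])
# Insured: pay the premium and keep the cargo with certainty.
insured = expected_utility([(1.0, wealth + cargo - premium)])

# Expected financial value favours NOT insuring (expected loss is
# 0.1 * 8000 = 800, less than the 1000 premium), yet the risk-averse
# logarithmic utility favours the insurance:
print(insured > uninsured)  # True
```

The concavity of ln(w) is what drives the reversal: the utility lost in the disaster scenario outweighs the premium paid in the ordinary one, which is exactly the distinction between expected utility and expected financial value in the text.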

8. Information theory – Information theory studies the quantification, storage, and communication of information. A key measure in information theory is entropy, which quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a coin flip (two equally likely outcomes) provides less information than specifying the outcome of a roll of a die (six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, and error exponents. Applications of fundamental topics of information theory include lossless data compression, lossy data compression, and channel coding. The field is at the intersection of mathematics, statistics, computer science, physics, and neurobiology, and it studies the transmission, processing, utilization, and extraction of information. Abstractly, information can be thought of as the resolution of uncertainty. Information theory is a broad and deep mathematical theory with equally broad and deep applications, among which is the vital field of coding theory; its codes can be subdivided into data compression and error-correction techniques, and in the latter case it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms, and concepts, methods, and results from coding theory and information theory are widely used in cryptography and cryptanalysis (see the article "ban" for a historical application). Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even musical composition. Prior to Shannon's paper, limited information-theoretic ideas had been developed at Bell Labs; the unit of information there was the decimal digit, much later renamed the hartley in Ralph Hartley's honour as a unit, scale, or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.
Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann; information theory itself is based on probability theory and statistics. It often concerns itself with measures of the information of the distributions associated with random variables. Important quantities of information are entropy, a measure of the information in a single random variable, and mutual information. The choice of logarithmic base determines the unit of information entropy that is used: a common unit of information is the bit, based on the binary logarithm; other units include the nat, based on the natural logarithm, and the hartley, based on the common logarithm. An expression of the form p log p is considered by convention to be equal to zero whenever p = 0; this is justified because lim_{p→0+} p log p = 0 for any logarithmic base.
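The entropy measure and the p log p = 0 convention above can be sketched in a few lines of Python, reproducing the coin-versus-die comparison from the text (entropy H = -Σ p·log2 p, in bits):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution.

    Terms with p = 0 are skipped, implementing the convention
    that p*log(p) is taken to be 0 when p = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))           # fair coin: 1.0 bit
print(round(entropy([1/6] * 6), 3))  # fair die: 2.585 bits (log2 of 6)
print(entropy([1.0, 0.0]) == 0)      # a certain outcome carries no information: True
```

As the text says, the die outcome carries more information than the coin flip: six equally likely outcomes give log2(6) ≈ 2.585 bits against the coin's 1 bit, and a process with a certain outcome has zero entropy.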

9. Control theory – Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs, and with how their behavior is modified by feedback. The usual objective of control theory is to control a system, often called the plant, so that its output follows a desired control signal, called the reference. To do this, a controller is designed which monitors the output; the difference between actual and desired output, called the error signal, is applied as feedback to the input of the system, to bring the actual output closer to the reference. Some topics studied in control theory are stability, controllability, and observability, and extensive use is usually made of a diagrammatic style known as the block diagram. Although a major application of the theory is in control systems engineering, control theory, as the general theory of feedback systems, is useful wherever feedback occurs. A few examples are in physiology, electronics, climate modeling, machine design, ecosystems, navigation, neural networks, predator–prey interaction, and gene expression. Control systems may be thought of as having four functions: measure, compare, compute, and correct. These four functions are completed by five elements: detector, transducer, transmitter, controller, and final control element. The measuring function is completed by the detector, transducer, and transmitter; in practical applications these three elements are typically contained in one unit, a standard example being a resistance thermometer. Older controller units have been mechanical, as in a governor or a carburetor. The correct function is completed with a final control element, which changes an input or output in the control system that affects the manipulated or controlled variable. Fundamentally, there are two types of control loop: open-loop control and closed-loop control. In open-loop control, the action from the controller is independent of the process output.
A good example of open-loop control is a central heating boiler controlled only by a timer, so that heat is applied for a constant time regardless of the temperature of the building. In closed-loop control, the action from the controller is dependent on the process output: a closed-loop controller has a feedback loop which ensures the controller exerts a control action to give a process output the same as the reference input, or set point.
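The open- versus closed-loop distinction can be sketched with a toy heating "plant" in Python: a temperature that leaks heat toward ambient unless heat is applied. The gains, time constants, and initial values are all invented for illustration, and the closed-loop controller here is a bare proportional controller, the simplest way to act on the error signal.

```python
def simulate(closed_loop, steps=200, setpoint=20.0):
    """Simulate a toy heated room: heat input minus losses to ambient."""
    temp, ambient = 5.0, 5.0
    for _ in range(steps):
        if closed_loop:
            # Closed loop: heating depends on the error signal (setpoint - temp).
            heat = 0.5 * (setpoint - temp)
        else:
            # Open loop: timer-style constant heating, ignoring the output.
            heat = 1.0
        heat = max(heat, 0.0)                  # the boiler cannot cool
        temp += heat - 0.1 * (temp - ambient)  # plant dynamics with heat losses
    return temp

print(round(simulate(closed_loop=True), 1))   # 17.5: tracks the reference, with offset
print(round(simulate(closed_loop=False), 1))  # 15.0: settles wherever losses balance
```

The closed loop ends near the set point because the heat output grows with the error, while the open loop settles wherever constant heating happens to balance the losses, with no regard to the reference. The residual gap between 17.5 and the 20.0 set point is the well-known steady-state offset of purely proportional control.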

10. Outline of physical science – Physical science is a branch of natural science that studies non-living systems, in contrast to life science. It in turn has many branches, each referred to as a "physical science". In natural science, hypotheses must be verified scientifically to be regarded as scientific theory; validity, accuracy, and social mechanisms ensuring quality control, such as review and repeatability of findings, are among the criteria. Natural science can be broken into two main branches: life science (for example, biology) and physical science. Each of these branches, and all of their sub-branches, are referred to as natural sciences. Physics is the natural and physical science that involves the study of matter and its motion through space and time, along with related concepts such as energy and force; more broadly, it is the analysis of nature, conducted in order to understand how the universe behaves. Chemistry studies the composition, structure, and properties of matter. Earth science is an all-embracing term referring to the fields of science dealing with planet Earth; it is the study of how the natural environment works and includes the study of the atmosphere, hydrosphere, lithosphere, and biosphere. The history of physical science is the history of the branch of science that studies non-living systems, though the term "physical" creates an unintended, somewhat arbitrary distinction, since many branches of physical science also study biological phenomena. The outline also covers the history of astrodynamics (the application of ballistics and celestial mechanics to problems concerning the motion of rockets), the history of astrometry (the branch of astronomy that involves precise measurements of the positions and movements of stars), and the history of cosmology (the discipline that deals with the nature of the Universe as a whole).
Further entries cover the history of physical cosmology (the study of the largest-scale structures of the Universe), the history of planetary science (the scientific study of planets, moons, and planetary systems, in particular those of the Solar System and the processes that form them), the history of neurophysics (the branch of biophysics dealing with the nervous system), the history of chemical physics (the branch of physics that studies chemical processes from the point of view of physics), the history of computational physics (the study and implementation of algorithms to solve problems in physics for which a quantitative theory already exists), the history of condensed matter physics (the study of the properties of condensed phases of matter), the history of cryogenics (the study of the production of low temperatures), the history of biomechanics (the study of the structure and function of biological systems such as humans, animals, plants, and organs), and the history of fluid mechanics (the study of fluids and the forces on them).

11. Physics – Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. It intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and its boundaries are not rigidly defined; new ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences: the stars and planets were often a target of worship, believed to represent the gods, and while the explanations for these phenomena were often unscientific and lacking in evidence, according to Asger Aaboe the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. In the medieval Islamic world, the most notable innovations were in the field of optics and vision, which came from the works of many scientists such as Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi, and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera, and many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler, and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haythams Optics ranks alongside that of Newtons work of the same title, the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the devices as what Ibn al-Haytham did. From this, such important things as eyeglasses, magnifying glasses, telescopes, Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased, however, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory, both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger, from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities, in many ways, physics stems from ancient Greek philosophy

12.
Classical physics
–
Classical physics refers to theories of physics that predate modern, more complete, or more widely applicable theories. As such, the definition of a classical theory depends on context; classical physical concepts are often used when modern theories are unnecessarily complex for a particular situation. Classical theory has at least two distinct meanings in physics. In the context of quantum mechanics, classical theory refers to theories of physics that do not use the quantisation paradigm. Likewise, classical field theories, such as general relativity and classical electromagnetism, are those that do not use quantum mechanics; in the context of general and special relativity, classical theories are those that obey Galilean relativity. Modern physics includes quantum theory and relativity, when applicable. A physical system can be described by classical physics when it satisfies conditions such that the laws of classical physics are approximately valid. In practice, physical objects ranging from those larger than atoms and molecules to objects in the macroscopic and astronomical realm can be adequately described classically. Beginning at the atomic level and lower, the laws of classical physics break down. Electromagnetic fields and forces can be described well by classical electrodynamics at length scales where quantum effects are negligible. Unlike quantum physics, classical physics is generally characterized by the principle of complete determinism, although deterministic interpretations of quantum mechanics do exist. Mathematically, classical physics equations are those in which Planck's constant does not appear. According to the correspondence principle and Ehrenfest's theorem, as a system becomes larger or more massive the classical dynamics tends to emerge, with some exceptions, such as superfluidity. This is why we can usually ignore quantum mechanics when dealing with everyday objects; however, one of the most vigorous on-going fields of research in physics is classical-quantum correspondence.
This field of research is concerned with the discovery of how the laws of quantum physics give rise to classical physics in the limit of the large scales of the classical level. Computer modeling is essential for quantum and relativistic physics. Classical physics is considered the limit of quantum mechanics for large numbers of particles; on the other hand, classical mechanics is derived from relativistic mechanics. For example, in many formulations of special relativity, a correction factor (v/c)² appears, where v is the velocity of the object and c is the speed of light. For velocities much smaller than that of light, one can neglect the terms containing c², and these formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities. Computer modeling has to be as realistic as possible; in cases such as superfluidity, classical physics would introduce an error, so to produce reliable models of the world we cannot always use classical physics. It is true that quantum theories consume time and computer resources, and the equations of classical physics could be resorted to for a quick solution, but such a solution would lack reliability in those cases.

13.
Modern physics
–
Modern physics is the post-Newtonian conception of physics. In general, the term is used to refer to any branch of physics either developed in the early 20th century and onwards, or greatly influenced by early-20th-century physics. The realm of small velocities and large distances is usually that of classical physics, although, strictly speaking, quantum and relativistic effects exist across all scales. In a literal sense, the term modern physics means up-to-date physics; in this sense, a significant portion of so-called classical physics is modern. However, since roughly 1890, new discoveries have caused significant paradigm shifts, notably the advent of quantum mechanics and of Einsteinian relativity. Physics that incorporates elements of either QM or ER is said to be modern physics, and it is in this latter sense that the term is generally used. Modern physics is often encountered when dealing with extreme conditions: quantum mechanical effects tend to appear when dealing with lows (low temperatures, small distances), while relativistic effects tend to appear when dealing with highs (high velocities, large distances), the middles being classical behaviour. For example, when analysing the behaviour of a gas at room temperature, the Maxwell–Boltzmann distribution may be used; however, near absolute zero, the Maxwell–Boltzmann distribution fails to account for the observed behaviour of the gas, and the Fermi–Dirac or Bose–Einstein distributions have to be used instead. Very often, it is possible to find, or retrieve, the classical behaviour by analysing the modern description at low speeds; when doing so, the result is called the classical limit.
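The distribution example above can be made concrete. The sketch below (illustrative only, with energies measured in units of kT relative to the chemical potential) compares the classical Maxwell–Boltzmann occupancy with the quantum Fermi–Dirac and Bose–Einstein ones: all three converge in the dilute, high-energy limit and diverge near the ground state, which is exactly the classical-limit behaviour the entry describes.

```python
import math

def maxwell_boltzmann(e, mu, kT):
    # Classical occupancy: exp(-(e - mu) / kT)
    return math.exp(-(e - mu) / kT)

def bose_einstein(e, mu, kT):
    # Quantum occupancy for bosons (requires e > mu)
    return 1.0 / (math.exp((e - mu) / kT) - 1.0)

def fermi_dirac(e, mu, kT):
    # Quantum occupancy for fermions
    return 1.0 / (math.exp((e - mu) / kT) + 1.0)

# High energy (e - mu = 10 kT): the quantum statistics reduce to the
# classical one; low energy (0.1 kT): they differ dramatically.
for e in (10.0, 0.1):
    print(e, maxwell_boltzmann(e, 0.0, 1.0),
          bose_einstein(e, 0.0, 1.0), fermi_dirac(e, 0.0, 1.0))
```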

14.
Applied physics
–
Applied physics is physics which is intended for a particular technological or practical use. It is usually considered a bridge or connection between physics and engineering; this approach is similar to that of applied mathematics. Applied physicists can also be interested in the use of physics for scientific research. For instance, the field of accelerator physics can contribute to research in theoretical physics by working with engineers to enable the design and construction of high-energy colliders.

15.
Theoretical physics
–
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations; the quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory similarly differs from a mathematical theory, in the sense that the word theory has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities: Archimedes realized that a ship floats by displacing its mass of water; Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example: phenomenologists might employ empirical formulas to agree with experimental results, often without deep physical understanding. Modelers often appear much like phenomenologists, but try to model speculative theories that have certain desirable features. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated.
Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled, e.g. the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms, or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result; sometimes, though, advances may proceed along different paths. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models. Physical theories become accepted if they are able to make correct predictions and no incorrect ones. They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. Theoretical physics began at least 2,300 years ago, under the pre-Socratic philosophy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon.
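The Archimedes example mentioned above can be stated as a one-line calculation. The sketch below is illustrative (the fresh-water density and the example masses are assumptions, not from the source): a floating body displaces water whose mass equals its own, so the submerged volume follows directly.

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water (assumed)

def submerged_volume(mass_kg):
    # Archimedes: a floating body displaces water equal to its own mass,
    # so rho_water * V_submerged = m  =>  V_submerged = m / rho_water
    return mass_kg / RHO_WATER

def floats(mass_kg, total_volume_m3):
    # The body floats if the volume it would need to displace
    # does not exceed its total volume (i.e. mean density < water's).
    return submerged_volume(mass_kg) <= total_volume_m3

# A 500 kg hull enclosing 1 m^3 floats; a 1500 kg one of the same size sinks.
print(floats(500.0, 1.0), floats(1500.0, 1.0))
```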

16.
Experimental physics
–
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as the Cavendish experiment, to more complicated ones. It is often put in contrast with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than with the acquisition of knowledge about it. Although experimental and theoretical physics are concerned with different aspects of nature, theoretical physics can also offer insight on what data is needed in order to gain a better understanding of the universe, and on what experiments to devise in order to obtain it. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, a dialogue between the characters Simplicio and Salviati discusses the motion of a ship and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate a form of the conservation of momentum. Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton, which set out his laws of motion and the law of universal gravitation. Both theories agreed well with experiment; the Principia also included several theories in fluid dynamics. From the late 17th century onward, thermodynamics was developed by physicists and chemists such as Boyle and Young. In 1733, Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Thompson demonstrated the conversion of mechanical work into heat. Ludwig Boltzmann, in the 19th century, is responsible for the modern form of statistical mechanics.
Besides classical mechanics and thermodynamics, another field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Robert Boyle and Stephen Gray established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom. It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that magnetic fields and electricity could generate each other. In 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism. Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries.

17.
Computational physics
–
Computational physics is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science. In physics, different theories based on mathematical models provide very precise predictions of how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression; in such cases, numerical approximations are required. There is a debate about the status of computation within the scientific method: while computers can be used in experiments for the measurement and recording of data, this clearly does not constitute a computational approach. Physics problems are in general very difficult to solve exactly. This is due to several reasons: lack of algebraic and/or analytic solubility, and complexity. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity of many-body problems tend to grow quickly: a macroscopic system typically has of the order of 10²³ constituent particles, which makes direct simulation problematic. Solving quantum mechanical problems is generally of exponential order in the size of the system. Because computational physics addresses a broad class of problems, it is generally divided amongst the different mathematical problems it numerically solves, or the methods it applies. Furthermore, computational physics encompasses the tuning of the software and hardware structure to solve the problems, since the problems can be very demanding in processing power or memory. It is possible to find a corresponding computational branch for every major field in physics, for example computational mechanics. Computational mechanics consists of computational fluid dynamics and computational solid mechanics.
One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics. The quantum many-body problem leads naturally to the large and rapidly growing field of computational chemistry. Computational solid state physics is an important division of computational physics dealing directly with material science. A field related to computational condensed matter is computational statistical mechanics; computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly, it concerns itself with topics in the social sciences, network theory, and mathematical models for the propagation of disease. Computational astrophysics is the application of these techniques and methods to astrophysical problems.
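Since the entry centers on numerically solving models without closed-form answers, here is a minimal sketch (not from the source; the method and step sizes are illustrative choices). The large-amplitude pendulum, θ'' = −(g/L)·sin θ, has no elementary closed-form solution, so it is integrated with the semi-implicit (symplectic) Euler method; for small amplitudes the result can be checked against the textbook period 2π√(L/g).

```python
import math

def pendulum_theta(theta0, L=1.0, g=9.81, dt=1e-4, t_end=2.0):
    """Integrate theta'' = -(g/L) sin(theta), starting at rest from
    angle theta0, with the semi-implicit Euler method; returns the
    angle at time t_end."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        omega += -(g / L) * math.sin(theta) * dt  # update velocity first
        theta += omega * dt                       # then position
    return theta

# Sanity check: for a tiny amplitude, after one small-angle period
# T = 2*pi*sqrt(L/g) the pendulum returns (very nearly) to theta0.
T = 2.0 * math.pi * math.sqrt(1.0 / 9.81)
print(pendulum_theta(0.01, t_end=T))
```

The semi-implicit update is chosen because it keeps the oscillation energy bounded over long runs, unlike plain forward Euler, which spirals outward.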

18.
Mechanics
–
Mechanics is an area of science concerned with the behaviour of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The scientific discipline has its origins in Ancient Greece with the writings of Aristotle; during the early modern period, scientists such as Khayyam, Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics. Classical mechanics is a branch of physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as a branch of science which deals with the motion of, and forces on, bodies. Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences; essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of a larger scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the atomic level and is indispensable for the explanation and prediction of processes at the molecular and atomic scales. However, for macroscopic processes classical mechanics is able to solve problems which are difficult in quantum mechanics and hence remains useful.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement, time, velocity, acceleration, and mass. Until about 400 years ago, however, motion was explained from a very different point of view. Galileo showed that the speed of falling objects increases steadily during the time of their fall, and that this acceleration is the same for heavy objects as for light ones, provided air friction is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion. In analogy to the distinction between quantum and classical mechanics, Einstein's general and special theories of relativity have expanded the scope of Newtonian mechanics; the differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a massive body approaches the speed of light. Relativistic corrections are also needed for quantum mechanics; although general relativity has not been integrated with quantum theory, the two theories remain incompatible, a hurdle which must be overcome in developing a theory of everything.

19.
Classical mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of objects from projectiles to parts of machinery, as well as astronomical objects such as spacecraft, planets and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases. Classical mechanics also provides extremely accurate results as long as the domain of study is restricted to large objects and speeds that do not approach the speed of light. Where both quantum and classical mechanics cannot apply, such as at the quantum level with high speeds, quantum field theory becomes applicable. The term classical mechanics is often used because these aspects of physics were developed long before the emergence of quantum physics and relativity; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles; the motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom. However, the results for point particles can be applied to such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes, such as the location of an object in space; non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of a vector designated as r, pointing from the origin O to P. In general, the point particle need not be stationary relative to O, so that r is a function of t, the time.
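A minimal sketch of the point-particle picture just described (illustrative, not from the source): the state is nothing but position and velocity, and stepping Newton's second law a = F/m forward in time reproduces the analytic result x(t) = ½(F/m)t² for a constant force.

```python
def simulate(mass, force, t_end, dt=1e-4):
    # A point particle's state: position x and velocity v (1D).
    x, v = 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        v += (force / mass) * dt  # Newton's second law: a = F/m
        x += v * dt               # advance the position
    return x

# Constant 1 N force on a 1 kg particle for 2 s:
# analytic answer x = (1/2)(F/m) t^2 = 2.0 m.
print(simulate(1.0, 1.0, 2.0))
```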

20.
Analytical mechanics
–
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related alternative formulations of classical mechanics. It was developed by many scientists and mathematicians during the 18th century and onward. Analytical mechanics represents the system by scalar properties of motion as a whole, usually its total kinetic and potential energy; a scalar is a quantity, whereas a vector is represented by quantity and direction. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Analytical mechanics takes advantage of a system's constraints to solve problems: the constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in this context as generalized coordinates. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics or use the Udwadia–Kalaba equation. Two dominant branches of analytical mechanics are Lagrangian mechanics and Hamiltonian mechanics; there are other formulations, such as Hamilton–Jacobi theory and Routhian mechanics. All equations of motion for particles and fields, in any formalism, can be derived from a variational principle. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics; rather, it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity. Analytical mechanics is used widely, from physics to applied mathematics. The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom; the definitions and equations have a close analogy with those of discrete mechanics.
Generalized coordinates and constraints. In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or another 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the motion from taking certain directions. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry; the resulting coordinates are known as generalized coordinates, denoted qi. Difference between curvilinear and generalized coordinates: generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom, i.e. each way the system can change its configuration, such as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates. D'Alembert's principle: the foundation on which the subject is built is D'Alembert's principle.
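A minimal illustration of generalized coordinates (a hypothetical example, not from the source): a pendulum bob moving on a circle of radius L has two Cartesian coordinates (x, y) but only one degree of freedom, so the single generalized coordinate q = θ describes it completely, and the constraint x² + y² = L² is satisfied automatically by construction.

```python
import math

def cartesian(theta, L=1.0):
    # Map the single generalized coordinate theta back to the two
    # Cartesian coordinates; the circle constraint is built in.
    return L * math.sin(theta), -L * math.cos(theta)

# Whatever theta is, the recovered (x, y) always lies on the circle,
# so no separate constraint equation is needed.
x, y = cartesian(0.7)
print(x * x + y * y)  # equals L^2 = 1 up to rounding
```

This is the bookkeeping advantage the entry describes: one coordinate instead of two coordinates plus one constraint equation.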

21.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues today. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system; these tensors can be expressed in coordinate systems for computational convenience. Materials, such as solids, liquids and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. Nevertheless, a body can often be treated as a continuum: one that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis/assumption hinges on the concept of a representative elementary volume. This condition provides a link between the experimentalist's and the theoretician's viewpoints on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure. The latter then provide a basis for stochastic finite elements. The levels of SVE (statistical volume element) and RVE (representative volume element) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, namely when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. Consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics effectively models the movement of cars via a partial differential equation for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum-discrete dichotomy underlying continuum modelling in general. To start modelling, let x measure distance along the highway, let t be time, and let ρ(x, t) be the density of cars on the highway. Cars do not appear and disappear: consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is N = ∫_a^b ρ dx, and since cars are conserved, dN/dt = 0. Differentiating under the integral, with the endpoints moving along with the cars, gives ∫_a^b [∂ρ/∂t + ∂(ρv)/∂x] dx = 0 for every such interval, where v is the car speed. The only way an integral can be zero for all intervals is if the integrand is zero for all x. Consequently, conservation yields the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρv)/∂x = 0 for all positions on the highway. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires and financial traders. This PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem.
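The car-density conservation law above can be discretised directly. The sketch below is an illustrative upwind finite-difference scheme (assuming, purely for simplicity, a constant car speed v and a periodic road; none of these choices come from the source). The key property of the conservation form, that the total number of cars never changes, carries over to the discrete scheme exactly.

```python
def step_density(rho, v, dx, dt):
    """One upwind finite-difference step of d(rho)/dt + d(rho*v)/dx = 0
    for constant speed v > 0 on a periodic road (Python's rho[i-1]
    wraps around at i = 0)."""
    n = len(rho)
    flux = [v * r for r in rho]  # flux of cars: rho * v
    return [rho[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

# Total number of cars (the discrete integral of rho) is conserved,
# because the interior fluxes cancel in pairs (telescoping sum).
rho = [1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
for _ in range(10):
    rho = step_density(rho, v=1.0, dx=1.0, dt=0.5)
print(sum(rho))  # still 9.0, the initial car count
```

The time step obeys the stability condition v·dt/dx ≤ 1, which is why dt = 0.5 is safe here.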

22.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, civil engineering, chemical engineering, geophysics and astrophysics. Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Fluid mechanics can be mathematically complex, and its problems can often best be solved by numerical methods. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach to solving fluid mechanics problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Inviscid flow was further analyzed by various mathematicians, and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille. Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium, and is contrasted with fluid dynamics. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics, to meteorology, and to medicine. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow, the science of liquids and gases in motion. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure and density; it has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are used in engineering and crowd dynamics.
Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations; these can be expressed as equations in integral form over a control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Under this assumption, fluid properties can vary continuously from one element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic speed flows; those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis. The Navier–Stokes equations are differential equations that describe the force balance at a given point within a fluid.
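As a rough sketch of the Knudsen-number test described above (the hard-sphere mean-free-path formula and the effective molecular diameter for air are textbook assumptions, not given in the source), one can check that air flowing around an everyday metre-scale object is deep in the continuum regime:

```python
import math

def mean_free_path(kT, d, p):
    # Hard-sphere gas estimate: lambda = kT / (sqrt(2) * pi * d^2 * p)
    return kT / (math.sqrt(2.0) * math.pi * d**2 * p)

def knudsen(kT, d, p, L):
    # Kn = lambda / L; the continuum treatment is reasonable for Kn < 0.1
    return mean_free_path(kT, d, p) / L

# Air at roughly room conditions around a 1 m object (illustrative values):
kT = 1.380649e-23 * 293.0  # J (Boltzmann constant times ~293 K)
d = 3.7e-10                # m, assumed effective molecular diameter of air
p = 101325.0               # Pa, standard atmospheric pressure
print(knudsen(kT, d, p, L=1.0) < 0.1)  # True: continuum hypothesis applies
```

At very low pressure the mean free path grows until Kn exceeds 0.1 and a molecular (statistical-mechanics) treatment is needed, exactly the failure mode the entry mentions.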

23.
Solid mechanics
–
Solid mechanics is fundamental for civil, aerospace, nuclear, and mechanical engineering, for geology, and for many branches of physics such as materials science. It has specific applications in other areas, such as understanding the anatomy of living beings. One of the most common practical applications of solid mechanics is the Euler-Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses and strains. As shown in the following table, solid mechanics inhabits a central place within continuum mechanics; the field of rheology presents an overlap between solid and fluid mechanics. A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from rest shape is called deformation, and the proportion of deformation to original size is called strain. When the strain is directly proportional to the applied stress, the region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models; however, real materials often exhibit non-linear behavior, and as new materials are used and old ones are pushed to their limits, non-linear models are becoming more common. There are four basic models that describe how a solid responds to an applied stress. Elastically: when an applied stress is removed, the material returns to its undeformed state; linearly elastic materials are those that deform proportionally to the applied load. Viscoelastically: the material response has time-dependence in addition to its elastic behaviour. Plastically: materials that behave elastically generally do so when the applied stress is less than a yield value; when the stress is greater than the yield stress, the material behaves plastically, and deformation that occurs after yield is permanent. Thermoelastically: there is coupling of mechanical with thermal responses; in general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
1874 – A general energy theorem is established that includes the method of least work as a special case. 1922 – Timoshenko corrects the Euler–Bernoulli beam equation. 1936 – Hardy Cross publishes the moment distribution method, an important innovation in the design of continuous frames.

24.
Electromagnetism
–
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force usually exhibits electromagnetic fields such as electric fields, magnetic fields, and light. The other three fundamental interactions are the strong interaction, the weak interaction, and gravitation. The word electromagnetism is a compound form of two Greek terms, ἤλεκτρον (ēlektron), "amber", and μαγνῆτις λίθος (magnētis lithos), which means "magnesian stone". The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. Ordinary matter takes its form as a result of forces between individual atoms and molecules in matter, and these forces are a manifestation of the electromagnetic force. Electrons are bound by the electromagnetic force to atomic nuclei, and this determines their orbital shapes. The electromagnetic force also governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field; in classical electrodynamics, electric fields are described in terms of electric potential and electric current. Although electromagnetism is considered one of the four fundamental forces, at high energy the weak force and electromagnetic force are unified as a single electroweak force. In the history of the universe, this unified force broke into the two separate forces as the universe cooled. Originally, electricity and magnetism were considered to be two separate forces; several effects demonstrate their connection. Magnetic poles attract or repel one another in a manner similar to positive and negative charges, and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding magnetic field outside the wire; its direction depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or when a magnet is moved toward or away from it. While preparing for a lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a compass needle deflect away from magnetic north when the electric current from the battery he was using was switched on. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon. However, three months later he began more intensive investigations.
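The relation between a current in a wire and the magnetic field around it can be made quantitative. A minimal sketch, assuming the standard long-straight-wire result B = μ0·I / (2π·r) (which is not derived in the article itself):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def b_field_long_wire(current_a: float, distance_m: float) -> float:
    """Magnetic field magnitude at a distance r from a long straight
    wire carrying current I: B = mu0 * I / (2 * pi * r)."""
    return MU0 * current_a / (2 * math.pi * distance_m)

# 1 A of current, measured 1 cm from the wire: B = 2e-5 T
# (about 0.2 gauss, comparable to Earth's magnetic field).
print(b_field_long_wire(1.0, 0.01))
```

Reversing the sign of the current reverses the direction of the field, matching the dependence on current direction noted above.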

25.
Thermodynamics
–
Thermodynamics is a branch of science concerned with heat and temperature and their relation to energy and work. The behavior of these quantities is governed by the four laws of thermodynamics; these laws are explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry and chemical engineering. The initial application of thermodynamics to mechanical heat engines was extended early on to the study of chemical compounds. Chemical thermodynamics studies the role of entropy in the process of chemical reactions and has provided the bulk of the expansion and knowledge of the field. Other formulations of thermodynamics emerged in the following decades: statistical thermodynamics, or statistical mechanics, concerned itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach to the field in his axiomatic formulation of thermodynamics. A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis; the first law specifies that energy can be exchanged between physical systems as heat and work. In thermodynamics, interactions between large ensembles of objects are studied and categorized; central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles whose average motions define its properties, and these properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium; non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field. The history of the field generally begins with Otto von Guericke, who designed and built the world's first vacuum pump. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that nature abhors a vacuum. Shortly after Guericke, the English physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Denis Papin later built a steam digester, a closed vessel that confined steam until a high pressure was generated; later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine; he did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Joseph Black and James Watt performed experiments together, but it was Watt who conceived the idea of the external condenser, which resulted in a large increase in steam engine efficiency. Drawing on all the previous work, Sadi Carnot, the "father of thermodynamics", published Reflections on the Motive Power of Fire in 1824.
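Boyle's Law, mentioned above, makes a simple worked example: at constant temperature, the product of pressure and volume for a fixed amount of gas is constant, so P1·V1 = P2·V2. A minimal sketch (the numbers are illustrative, not from the article):

```python
# Boyle's law: at constant temperature, P * V is constant for a fixed
# amount of gas, so P1 * V1 = P2 * V2.
def boyle_final_pressure(p1: float, v1: float, v2: float) -> float:
    """Pressure after an isothermal change of volume from v1 to v2."""
    return p1 * v1 / v2

# Halving the volume of a gas initially at atmospheric pressure
# (101.3 kPa) doubles its pressure.
print(boyle_final_pressure(101.3, 2.0, 1.0))  # 202.6 kPa
```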

26.
Atomic physics
–
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around the nucleus and the processes by which these arrangements change. This comprises ions and neutral atoms; unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. As with many scientific fields, strict delineation can be highly contrived, and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Atomic physics primarily considers atoms in isolation: atomic models consist of a nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules, nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. In a gas or plasma, the time-scales for atom–atom interactions are huge compared with the atomic processes of interest, which means that the atoms can be treated as if each were in isolation. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics. Electrons form notional shells around the nucleus; these are normally in a ground state but can be excited by the absorption of energy from light, magnetic fields, or interaction with a colliding particle. Electrons that populate a shell are said to be in a bound state; the energy necessary to remove an electron from its shell is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy, and the atom is said to have undergone the process of ionization. If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state.
After a certain time, an electron in an excited state will jump to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved. If an inner electron has absorbed more than the binding energy, then a more outer electron may undergo a transition to fill the inner orbital. The Auger effect allows one to multiply ionize an atom with a single photon. There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes.
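The energy bookkeeping of photoionization described above can be sketched numerically: the photon energy in excess of the binding energy becomes kinetic energy of the ejected electron. A minimal illustration (the chosen frequency is an assumption for the example, not from the article):

```python
H_EV = 4.135667696e-15  # Planck constant in eV*s

def photoelectron_ke_ev(photon_freq_hz: float, binding_energy_ev: float) -> float:
    """Kinetic energy carried off by the electron after photoionization:
    KE = h * f - E_binding. A negative result means the photon cannot
    ionize the atom from this state."""
    return H_EV * photon_freq_hz - binding_energy_ev

# Hydrogen ground state (binding energy 13.6 eV) hit by a 4e15 Hz photon:
# the photon carries ~16.5 eV, leaving ~2.94 eV of kinetic energy.
print(round(photoelectron_ke_ev(4e15, 13.6), 2))
```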

27.
Nuclear physics
–
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions. Other forms of nuclear matter are also studied. Nuclear physics should not be confused with atomic physics, which studies the atom as a whole, including its electrons. Discoveries in nuclear physics have led to applications in many fields; such applications are studied in the field of nuclear engineering. Particle physics evolved out of nuclear physics, and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars. The history of nuclear physics as a discipline distinct from atomic physics starts with the discovery of radioactivity by Henri Becquerel in 1896. The discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. In the years that followed, radioactivity was extensively investigated, notably by Marie and Pierre Curie as well as by Ernest Rutherford and his collaborators. By the turn of the century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays. The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery, and to Marie and Pierre Curie, for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his "investigations into the disintegration of the elements and the chemistry of radioactive substances". In 1905, Albert Einstein formulated the idea of mass–energy equivalence. In 1906, Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter".
Hans Geiger expanded on this work in a communication to the Royal Society with experiments he and Rutherford had done, passing alpha particles through air, aluminum foil, and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and in 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. But Rutherford instructed his team to look for something that shocked him to observe: a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. As an example of the older picture, nitrogen-14 was thought in this model to consist of a nucleus with 14 protons and 7 electrons. The model worked well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929.
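Einstein's mass–energy equivalence, mentioned above, can be illustrated with a quick calculation. A minimal sketch of E = m·c^2 (the one-gram figure is an illustrative choice, not from the article):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def rest_energy_joules(mass_kg: float) -> float:
    """Mass-energy equivalence: E = m * c**2."""
    return mass_kg * C ** 2

# One gram of mass is equivalent to roughly 9e13 J of energy,
# which is why nuclear processes release so much more energy
# than chemical ones for the same mass change.
print(rest_energy_joules(1e-3))
```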

28.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these elementary particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model; in more technical terms, particles are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. All particles and their interactions observed to date can be described almost entirely by a quantum field theory called the Standard Model. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a bewildering variety of particles were found in collisions of particles from increasingly high-energy beams; it was referred to informally as the "particle zoo". The current state of the classification of all elementary particles is explained by the Standard Model, which describes the strong, weak, and electromagnetic fundamental interactions; the species of gauge bosons are the gluons, the W−, W+ and Z bosons, and the photon.
The Standard Model also contains 24 fundamental fermions, which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. Early in the morning on 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The world's major particle physics laboratories include the following. Brookhaven National Laboratory: its main facility is the Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions; it is the world's first heavy ion collider and the world's only polarized proton collider. The Budker Institute of Nuclear Physics: its main projects are now the electron–positron colliders VEPP-2000, operated since 2006. CERN: its main project is now the Large Hadron Collider, which had its first beam circulation on 10 September 2008 and is now the world's most energetic collider of protons; it also became the most energetic collider of heavy ions after it began colliding lead ions. DESY: its main facility is the Hadron Elektron Ring Anlage (HERA), which collides electrons and positrons with protons.

29.
Condensed matter physics
–
Condensed matter physics is a branch of physics that deals with the physical properties of condensed phases of matter, in which particles adhere to each other. Condensed matter physicists seek to understand the behavior of these phases by using physical laws; in particular, these include the laws of quantum mechanics, electromagnetism, and statistical mechanics. The field overlaps with chemistry, materials science, and nanotechnology, and the theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics, such as crystallography, metallurgy, elasticity, and magnetism, were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the new, related specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. References to the "condensed" state can be traced to earlier sources; as one early formulation put it, "it would be more correct to unify them under the title of condensed bodies". One of the first studies of condensed states of matter was by English chemist Humphry Davy. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility, and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen and hydrogen. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and the then newly discovered helium, respectively.
Paul Drude in 1900 proposed the first theoretical model for an electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons. In 1911, Heike Kamerlingh Onnes discovered superconductivity; the phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, and Felix Bloch. Pauli realized that the free electrons in a metal must obey Fermi–Dirac statistics; using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and made it better able to explain the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. Magnetism as a property of matter has been known in China since 4000 BC. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets.
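The free-electron picture in Drude's model leads to a concrete formula for the DC conductivity of a metal, sigma = n·e^2·τ/m. A minimal sketch (the electron density and relaxation time for copper below are illustrative order-of-magnitude values, not quoted from the article):

```python
E_CHARGE = 1.602176634e-19    # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def drude_conductivity(n_per_m3: float, tau_s: float) -> float:
    """DC conductivity of a free-electron gas in the Drude model:
    sigma = n * e**2 * tau / m."""
    return n_per_m3 * E_CHARGE**2 * tau_s / M_ELECTRON

# Illustrative values for copper: electron density ~8.5e28 m^-3 and
# relaxation time ~2.5e-14 s give a conductivity near the measured
# value of roughly 6e7 S/m.
print(drude_conductivity(8.5e28, 2.5e-14))
```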

30.
Plasma (physics)
–
Plasma is one of the four fundamental states of matter, the others being solid, liquid, and gas. Unlike these three states, plasma does not naturally exist on the Earth under normal surface conditions; the term was first introduced by chemist Irving Langmuir in the 1920s. True plasma production comes from the separation of ions and electrons, which produces an electric field. Based on the surrounding temperature and density, either partially ionized or fully ionized forms of plasma may be produced. The positive charge in ions is achieved by stripping away electrons from atomic nuclei; the number of electrons removed is related either to the increase in temperature or to the local density of other ionized matter. Plasma may be the most abundant form of ordinary matter in the universe, although this hypothesis is currently tentative, based on the existence and unknown properties of dark matter. Plasma is mostly associated with the Sun and stars, extending to the rarefied intracluster medium. Plasma was first identified in a Crookes tube, and so described by Sir William Crookes in 1879. The nature of the Crookes tube "cathode ray" matter was subsequently identified by British physicist Sir J. J. Thomson. The term plasma was coined by Irving Langmuir in 1928, perhaps because the glowing discharge molds itself to the shape of the Crookes tube; Langmuir wrote that "we shall use the name plasma to describe this region containing balanced charges of ions and electrons". Plasma is a neutral medium of unbound positive and negative particles. Although these particles are unbound, they are not "free" in the sense of not experiencing forces; in turn, this governs collective behavior with many degrees of variation. The average number of particles in the Debye sphere is given by the plasma parameter. Among the criteria that characterize a plasma is that of bulk interactions: the Debye screening length is short compared to the physical size of the plasma.
This criterion means that interactions in the bulk of the plasma are more important than those at its edges; when this criterion is satisfied, the plasma is quasineutral. A second criterion concerns the plasma frequency: the electron plasma frequency is large compared to the electron–neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. For plasma to exist, ionization is necessary. The term "plasma density" by itself usually refers to the electron density, that is, the number of free electrons per unit volume. The degree of ionization of a plasma is the proportion of atoms that have lost or gained electrons; even a partially ionized gas in which as little as 1% of the particles are ionized can have the characteristics of a plasma. The degree of ionization, α, is defined as α = ni / (ni + nn), where ni is the number density of ions and nn is the number density of neutral atoms.
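The definition of the degree of ionization can be computed directly from the two number densities. A minimal sketch of α = ni / (ni + nn):

```python
def ionization_degree(n_ions: float, n_neutrals: float) -> float:
    """Degree of ionization: alpha = n_i / (n_i + n_n), where n_i is the
    ion number density and n_n the neutral-atom number density."""
    return n_ions / (n_ions + n_neutrals)

# A gas with 1 ion for every 99 neutral atoms is 1% ionized, which,
# as noted above, can already be enough for plasma behavior.
print(ionization_degree(1.0, 99.0))  # 0.01
```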

31.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms; in one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled "On the nature of light and colours". This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. These studies were followed in 1900 by the quantum hypothesis of Max Planck; Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits.
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect; he won the 1921 Nobel Prize in Physics for this work. Lower energy/frequency means increased time, and vice versa: photons of differing frequencies all deliver the same amount of action, but do so in varying time intervals. High-frequency waves are damaging to human tissue because they deliver their action packets concentrated in time. The Copenhagen interpretation of Niels Bohr became widely accepted. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons.
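The point about energy being delivered in packets of different sizes can be made concrete with E = hν: for the same total energy, high-frequency radiation arrives in fewer, more energetic quanta. A minimal sketch (the two example frequencies are illustrative choices):

```python
H = 6.62607015e-34  # Planck constant, J*s

def photons_per_joule(freq_hz: float) -> float:
    """Number of quanta of frequency nu needed to carry one joule,
    using Planck's relation E = h * nu for a single photon."""
    return 1.0 / (H * freq_hz)

# Per E = h*nu, one joule of radio-frequency radiation is spread over
# vastly more photons than one joule of X-rays:
print(photons_per_joule(1e9))   # ~1 GHz radio: ~1.5e24 photons
print(photons_per_joule(1e18))  # X-ray: ~1.5e15 photons
```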

32.
Introduction to quantum mechanics
–
Quantum mechanics is the science of the very small. It explains the behaviour of matter and its interactions with energy on the scale of atoms and subatomic particles. By contrast, classical physics only explains matter and energy on a scale familiar to human experience, including the behaviour of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. This article describes how physicists discovered the limitations of classical physics; these concepts are described in roughly the order in which they were first discovered. For a more complete history of the subject, see History of quantum mechanics. Light behaves in some respects like particles and in other respects like waves. Matter—particles such as electrons and atoms—exhibits wavelike behaviour too. Some light sources, including neon lights, give off only certain frequencies of light. Quantum mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its energies, colours, and spectral intensities. Since one never observes half a photon, a single photon is a quantum, or smallest observable amount, of the electromagnetic field. More broadly, quantum mechanics shows that many quantities, such as angular momentum, that appear continuous in classical mechanics turn out to be quantized: angular momentum is required to take on one of a set of discrete allowable values, and since the gap between these values is so minute, the discontinuity is only apparent at the atomic level. Many aspects of quantum mechanics are counterintuitive and can seem paradoxical. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is – absurd". Thermal radiation is electromagnetic radiation emitted from the surface of an object due to the object's internal energy. If an object is heated sufficiently, it starts to emit light at the red end of the spectrum.
Heating it further causes the colour to change from red to yellow to white. A perfect emitter is also a perfect absorber: when it is cold, such an object looks perfectly black, because it absorbs all the light that falls on it and emits none. Consequently, an ideal thermal emitter is known as a black body. In the late 19th century, thermal radiation had been fairly well characterized experimentally. However, classical physics led to the Rayleigh–Jeans law, which agrees with experimental results well at low frequencies but strongly disagrees at high frequencies. Physicists searched for a single theory that explained all the experimental results. The first model that was able to explain the full spectrum of thermal radiation was put forward by Max Planck in 1900.
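The divergence between the classical Rayleigh–Jeans law and Planck's law can be checked numerically. A minimal sketch comparing the two spectral radiance formulas at a fixed temperature (the 5000 K value is an illustrative choice):

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 299_792_458.0    # speed of light, m/s

def rayleigh_jeans(nu: float, t: float) -> float:
    """Classical spectral radiance: 2 nu^2 kB T / c^2.
    Grows without bound as nu increases (the 'ultraviolet catastrophe')."""
    return 2 * nu**2 * KB * t / C**2

def planck(nu: float, t: float) -> float:
    """Planck's law: (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1).
    The h*nu quantum suppresses the high-frequency radiance."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * t))

T = 5000.0  # kelvin
for nu in (1e12, 1e14, 1e15):
    # Ratio is ~1 at low frequency and blows up at high frequency.
    print(nu, rayleigh_jeans(nu, T) / planck(nu, T))
```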

33.
Quantum field theory
–
In theoretical physics, quantum field theory (QFT) treats particles as excited states of the underlying physical field, so these are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, which are a formal tool of relativistically covariant perturbation theory, serving to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics, is still the paradigmatic example of a successful quantum field theory. Ordinarily, quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic particles. Since photons have rest mass zero, and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit in the emission and absorption processes which have to be postulated; for instance, the formalism of QFT is needed for an explicit description of photons. In fact, most topics in the early development of quantum theory were related to the interaction of radiation and matter. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields; a good example is the paper by Born, Jordan & Heisenberg. The basic idea was that in QFT the electromagnetic field should be represented by matrices in the same way that position and momentum were represented in matrix mechanics. The ideas of QM were thus extended to systems having an infinite number of degrees of freedom. The inception of QFT is usually considered to be Dirac's famous 1927 paper on "The quantum theory of the emission and absorption of radiation"; here Dirac coined the name quantum electrodynamics for the part of QFT that was developed first.
Employing the theory of the quantum harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Dirac's procedure became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics; these differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations. The methods of QFT could be applied to derive equations resulting from the quantum-mechanical treatment of particles, e.g. the Dirac equation and the Klein–Gordon equation. Schweber points out that the idea and procedure of second quantization go back to Jordan, in a number of papers from 1927. Some difficult problems concerning commutation relations, statistics, and Lorentz invariance were eventually solved. The first comprehensive account of a general theory of quantum fields, in particular the method of canonical quantization, followed at the end of the decade.
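The creation and annihilation operators mentioned above act on number (Fock) states as a|n⟩ = √n |n−1⟩ and a†|n⟩ = √(n+1) |n+1⟩, and for Bose–Einstein statistics satisfy the commutation relation [a, a†] = 1. A minimal sketch verifying this on individual basis states (the dictionary representation is an implementation choice for the example, not from the article):

```python
import math

# Represent a state as {occupation_number: amplitude}.
def a_op(state):
    """Annihilation operator: a|n> = sqrt(n) |n-1>."""
    return {n - 1: math.sqrt(n) * amp for n, amp in state.items() if n > 0}

def adag_op(state):
    """Creation operator: a-dagger|n> = sqrt(n+1) |n+1>."""
    return {n + 1: math.sqrt(n + 1) * amp for n, amp in state.items()}

def commutator_on(n):
    """Apply [a, a-dagger] = a a-dagger - a-dagger a to the state |n>."""
    ket = {n: 1.0}
    first = a_op(adag_op(ket))     # a a-dagger |n> = (n+1)|n>
    second = adag_op(a_op(ket))    # a-dagger a |n> = n|n>
    return {m: first.get(m, 0.0) - second.get(m, 0.0)
            for m in set(first) | set(second)}

# The bosonic commutation relation [a, a-dagger] = 1 holds on every
# basis state: the result is |n> itself (up to floating-point rounding).
for n in range(5):
    print(n, commutator_on(n))
```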

34.
Special relativity
–
In physics, special relativity is the generally accepted and experimentally well-confirmed physical theory regarding the relationship between space and time. In Albert Einstein's original pedagogical treatment, it is based on two postulates: the laws of physics are invariant in all inertial systems, and the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. It was originally proposed in 1905 by Albert Einstein in the paper "On the Electrodynamics of Moving Bodies". As of today, special relativity is the most accurate model of motion at any speed. Even so, the Newtonian mechanics model is still useful as an approximation at small velocities relative to the speed of light. Not until Einstein developed general relativity, to incorporate general (that is, accelerated) frames of reference and gravity, was the term "special relativity" employed; a translation that has often been used is "restricted relativity", since "special" really means "special case". Special relativity has replaced the notion of an absolute universal time with the notion of a time that is dependent on reference frame. Rather than an invariant time interval between two events, there is an invariant spacetime interval. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other; rather, space and time are interwoven into a single continuum known as spacetime. Events that occur at the same time for one observer can occur at different times for another. The theory is "special" in that it applies in the special case where the curvature of spacetime due to gravity is negligible. In order to include gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some outdated descriptions, is capable of handling accelerations as well as accelerated frames of reference.
In conditions of free fall, a locally Lorentz-invariant frame that abides by special relativity can be defined at sufficiently small scales, even in curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest; Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been recently observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics. Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light and the independence of physical laws from the choice of inertial system. The Principle of Invariant Light Speed: light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body; that is, light in vacuum propagates with the velocity c in at least one system of inertial coordinates. Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations; however, the most common set of postulates remains those employed by Einstein in his original paper.
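The quantitative content of the Lorentz transformations can be illustrated with the Lorentz factor γ = 1/√(1 − v²/c²), which governs time dilation and length contraction. A minimal sketch (the two example speeds are illustrative choices):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor: gamma = 1 / sqrt(1 - v**2 / c**2).
    A moving clock ticks slower by this factor (time dilation)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds gamma is indistinguishable from 1, which is why
# Newtonian mechanics works so well as a low-velocity approximation;
# at 0.9c the factor is already about 2.29.
print(lorentz_gamma(30.0))     # ~1.0 (roughly highway speed)
print(lorentz_gamma(0.9 * C))  # ~2.294
```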

35.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations to date, although general relativity is not the only relativistic theory of gravity. Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes (regions of space in which space and time are distorted in such a way that nothing, not even light, can escape) as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing, and general relativity also predicts the existence of gravitational waves, which have since been observed directly by the LIGO collaboration. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework, beginning in 1907 with a thought experiment involving an observer in free fall. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present; they are nonlinear and very difficult to solve.
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916 the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole; in line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations (the cosmological constant) to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding, which is readily described by the expanding cosmological solutions found by Friedmann in 1922, solutions that do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
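The Einstein field equations referred to in this entry take a compact tensor form. The following is the standard modern statement (sign conventions vary between textbooks):

```latex
% Einstein field equations: spacetime geometry (left side) sourced by
% the energy and momentum of matter and radiation (right side)
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},
\qquad
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}\, R\, g_{\mu\nu}
```

Here Lambda is the cosmological constant Einstein introduced in 1917, g is the metric, R and its contractions measure spacetime curvature, and T is the energy-momentum tensor of matter and radiation.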

36.
String theory
–
In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. It describes how strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by its vibrational state. In string theory, one of the vibrational states of the string corresponds to the graviton; thus string theory is a theory of quantum gravity. String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of its details. String theory was first studied in the late 1960s as a theory of the nuclear force. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It later developed into superstring theory, which posits a connection called supersymmetry between bosons and the class of particles called fermions. In late 1997, theorists discovered an important relationship called the AdS/CFT correspondence. One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances; another is that the theory is thought to describe an enormous landscape of possible universes. These issues have led some in the community to criticize these approaches to physics and to question the value of continued research on string theory unification.
In the twentieth century, two theoretical frameworks emerged for formulating the laws of physics. One was Albert Einstein's general theory of relativity, a theory that explains the force of gravity and the structure of space and time; the other was quantum mechanics, a very different formalism for describing physical phenomena using probability. In spite of their successes, many problems remain to be solved. One of the deepest problems in physics is the problem of quantum gravity: the general theory of relativity is formulated within the framework of classical physics. In addition to the problem of developing a consistent theory of quantum gravity, there are many other fundamental problems in the physics of atomic nuclei, black holes, and the early universe. String theory is a framework that attempts to address these questions.

37.
Chemistry
–
Chemistry is a branch of physical science that studies the composition, structure, properties and change of matter. Chemistry is sometimes called the central science because it bridges other natural sciences, including physics. For the differences between chemistry and physics, see the comparison of chemistry and physics. The history of chemistry can be traced to alchemy, which had been practiced for several millennia in various parts of the world. The word chemistry comes from alchemy, which referred to a set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy and mysticism. An alchemist was called a chemist in popular speech, and later the suffix -ry was added to describe the art of the chemist as chemistry. The modern word alchemy in turn is derived from the Arabic word al-kīmīā. In origin, the term is borrowed from the Greek χημία or χημεία; this may have Egyptian origins, since al-kīmīā is derived from the Greek χημία, which is in turn derived from the word Chemi or Kimi, the ancient name of Egypt in Egyptian. Alternately, al-kīmīā may derive from χημεία, meaning "cast together". In retrospect, the definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. The term "chymistry", in the view of the noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1837, Jean-Baptiste Dumas considered the word chemistry to refer to the science concerned with the laws and effects of molecular forces. More recently, in 1998, Professor Raymond Chang broadened the definition of chemistry to mean the study of matter and the changes it undergoes. Early civilizations, such as the Egyptians, Babylonians and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but didn't develop a systematic theory. Greek atomism dates back to 440 BC, arising in the works of philosophers such as Democritus and Epicurus.
In 50 BC, the Roman philosopher Lucretius expanded upon the theory in his book De rerum natura. Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments. Work, particularly the development of distillation, continued in the early Byzantine period, with the most famous practitioner being the 4th-century Greek-Egyptian Zosimos of Panopolis. Much later, Robert Boyle formulated Boyle's law, rejected the classical four elements, and proposed a mechanistic alternative of atoms. Before his work, though, many important discoveries had been made by scientists such as the Scottish chemist Joseph Black and the Dutchman J. B. van Helmont. The English scientist John Dalton proposed the modern theory of atoms: that all substances are composed of indivisible atoms of matter. Davy discovered nine new elements, including the alkali metals, by extracting them from their oxides with electric current. The Briton William Prout first proposed ordering all the elements by their atomic weight, as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the periodic table. Organic chemistry was developed by Justus von Liebig and others following Friedrich Wöhler's synthesis of urea, which proved that living organisms were, in theory, reducible to chemistry.

38.
Inorganic chemistry
–
Inorganic chemistry deals with the synthesis and behavior of inorganic and organometallic compounds. This field covers all chemical compounds except the myriad organic compounds; the distinction between the two disciplines is far from absolute, as there is much overlap in the subdiscipline of organometallic chemistry. It has applications in every aspect of the chemical industry, including catalysis, materials science, pigments, surfactants, coatings, medications and fuels. Many inorganic compounds are ionic compounds, consisting of cations and anions joined by ionic bonding. Examples of salts are magnesium chloride MgCl2, which consists of magnesium cations Mg2+ and chloride anions Cl−, and sodium oxide Na2O. In any salt, the proportions of the ions are such that the electric charges cancel out, so that the bulk compound is electrically neutral. The ions are described by their oxidation state, and their ease of formation can be inferred from the ionization potential or from the electron affinity of the parent elements. Important classes of inorganic compounds are the oxides, the carbonates and the sulfates. Many inorganic compounds are characterized by high melting points, and inorganic salts typically are poor conductors in the solid state. Other important features include their ease of crystallization and their solubility: some salts are very soluble in water, others are not. The simplest inorganic reaction is double displacement, in which, on mixing two salts, the ions are swapped without a change in oxidation state. In redox reactions, one reactant, the oxidant, lowers its oxidation state, and another reactant, the reductant, has its oxidation state increased; the net result is an exchange of electrons. Electron exchange can occur indirectly as well, e.g. in batteries, a key concept in electrochemistry.
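As an illustration of the two reaction types just described (standard textbook examples, not taken from this article), a double-displacement reaction swaps partner ions with no change in oxidation state, while a redox reaction transfers electrons between reactants:

```latex
% Double displacement: ions trade partners; all oxidation states unchanged
\mathrm{AgNO_3(aq) + NaCl(aq) \longrightarrow AgCl(s) + NaNO_3(aq)}
```

```latex
% Redox: Zn is the reductant (oxidized from 0 to +2),
% Cu^{2+} is the oxidant (reduced from +2 to 0)
\mathrm{Zn(s) + Cu^{2+}(aq) \longrightarrow Zn^{2+}(aq) + Cu(s)}
```

The second reaction, run in separate half-cells connected by a circuit, is the indirect electron exchange of a battery mentioned above.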
When one reactant contains hydrogen atoms, a reaction can take place by exchanging protons, as in acid-base chemistry. As a refinement of acid-base interactions, the HSAB theory takes into account the polarizability and size of ions. Inorganic compounds are found in nature as minerals: soil may contain iron sulfide as pyrite or calcium sulfate as gypsum. Inorganic compounds also serve multiple roles as biomolecules, for example as electrolytes. The first important man-made inorganic compound was ammonium nitrate, produced for soil fertilization through the Haber process. Inorganic compounds are also synthesized for use as catalysts, such as vanadium oxide and titanium chloride. Subdivisions of inorganic chemistry include organometallic chemistry, cluster chemistry and bioinorganic chemistry; these are active areas of research in inorganic chemistry, aimed toward new catalysts, superconductors, and therapies. Inorganic chemistry is a highly practical area of science; traditionally, the scale of a nation's economy could be evaluated by its productivity of sulfuric acid. The manufacturing of fertilizers is another application of industrial inorganic chemistry.

39.
Organic chemistry
–
Study of structure includes many physical and chemical methods to determine the chemical composition and the chemical constitution of organic compounds and materials. In the modern era, the range of organic chemistry extends further into the periodic table, with main group elements, including Group 1 and 2 organometallic compounds. Organic compounds either form the basis of, or are important constituents of, many products, including pharmaceuticals, petrochemicals and products made from them, plastics, fuels and explosives. Before the nineteenth century, chemists generally believed that compounds obtained from living organisms were endowed with a force that distinguished them from inorganic compounds; according to the concept of vitalism, organic matter was endowed with a vital force. During the first half of the nineteenth century, some of the first systematic studies of organic compounds were reported. Around 1816, Michel Chevreul started a study of soaps made from various fats, and he separated the different acids that, in combination with the alkali, produced the soap. Since these were all individual compounds, he demonstrated that it was possible to make a chemical change in various fats, producing new compounds. In 1828, Friedrich Wöhler produced the chemical urea, a constituent of urine, from inorganic starting materials; the event is now accepted as indeed disproving the doctrine of vitalism. In 1856, William Henry Perkin, while trying to manufacture quinine, accidentally produced the organic dye now known as Perkin's mauve; his discovery, made widely known through its financial success, greatly increased interest in organic chemistry. A crucial breakthrough for organic chemistry was the concept of chemical structure. Later, Paul Ehrlich popularized the concepts of "magic bullet" drugs and of systematically improving drug therapies.
His laboratory made decisive contributions to developing antiserum for diphtheria and to standardizing therapeutic serums. Early examples of organic reactions and applications were often found through a combination of luck and preparation for unexpected observations. The latter half of the 19th century, however, witnessed systematic studies of organic compounds; the development of synthetic indigo is illustrative. The production of indigo from plant sources dropped from 19,000 tons in 1897 to 1,000 tons by 1914, thanks to the synthetic methods developed by Adolf von Baeyer. In 2002, 17,000 tons of indigo were produced from petrochemicals. In the early part of the 20th century, polymers and enzymes were shown to be large organic molecules. The multiple-step synthesis of complex organic compounds is called total synthesis; total syntheses of natural compounds increased in complexity, reaching targets such as glucose. For example, cholesterol-related syntheses have opened ways to synthesize complex human hormones, and since the start of the 20th century the complexity of total syntheses has increased to include molecules of high complexity such as lysergic acid and vitamin B12.

40.
Analytical chemistry
–
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or may be combined with another method. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern, instrumental methods. Classical qualitative methods use separations such as precipitation, extraction, and distillation; identification may be based on differences in color, odor, melting point or boiling point. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation; qualitative and quantitative analysis can then be performed, often with the same instrument, using light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools, and it has applications in forensics, medicine, and science. Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. The first instrumental analysis was flame emissive spectrometry, developed by Robert Bunsen. Most of the major developments in analytical chemistry took place after 1900; during this period instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century. The separation sciences followed a similar line of development and also became increasingly transformed into high-performance instruments.
In the 1970s, many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Lasers have been used in chemistry as probes and even to initiate and influence reactions. Modern analytical chemistry is dominated by instrumental analysis, and many analytical chemists focus on a single type of instrument. Academics tend to focus either on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in; an effort to develop a new method might involve the use of a laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time; this is particularly true in industrial quality assurance, forensic and environmental applications.

41.
Physical chemistry
–
Some of the relationships that physical chemistry strives to resolve include the effects of: intermolecular forces on the physical properties of materials; reaction kinetics on the rate of a reaction; the identity of ions on the electrical conductivity of materials; surface chemistry and electrochemistry on cell membranes; and the interaction of one body with another in terms of quantities of heat and work, called thermodynamics. The number of phases, number of components and degrees of freedom can be correlated with one another with the help of the phase rule. The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems, for example predicting the properties of compounds from a description of their atoms. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are and how electrons are distributed around them. Spectroscopy is the related sub-discipline of physical chemistry that is specifically concerned with the interaction of electromagnetic radiation with matter. Another set of important questions in chemistry concerns what kinds of reactions can happen spontaneously; the thermodynamic answer can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes; however, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes, not with what actually does happen, or how fast, away from equilibrium. Which reactions do occur, and how fast, is the subject of chemical kinetics. One key idea is that for reactants to form products, most chemical species must pass over an energy barrier; in general, the higher the barrier, the slower the reaction. A second is that most chemical reactions occur as a sequence of elementary reactions. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics.
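The phase rule referred to above is Gibbs' phase rule; the following is its standard statement, not a formula taken from this article:

```latex
% Gibbs' phase rule: F = degrees of freedom, C = components, P = phases
F = C - P + 2
```

For example, liquid water in equilibrium with its vapor has C = 1 and P = 2, giving F = 1 - 2 + 2 = 1: fixing the temperature fixes the vapor pressure, which is why boiling-point curves are one-dimensional lines on a phase diagram.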
Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties, without relying on empirical correlations based on chemical similarities. The term "physical chemistry" was coined by Mikhail Lomonosov in 1752. Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper "On the Equilibrium of Heterogeneous Substances", which introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and the Gibbs phase rule. Other milestones include the subsequent naming and accreditation of enthalpy to Heike Kamerlingh Onnes. Together with Svante August Arrhenius, Jacobus van 't Hoff and Wilhelm Ostwald were the leading figures in physical chemistry in the late 19th century and early 20th century; all three were awarded the Nobel Prize in Chemistry between 1901 and 1909. Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s.

42.
Supramolecular chemistry
–
Supramolecular chemistry is the domain of chemistry beyond that of molecules; it focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. While traditional chemistry focuses on the covalent bond, supramolecular chemistry examines the weaker and reversible non-covalent interactions between molecules; these forces include hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi-pi interactions and electrostatic effects. The study of non-covalent interactions is crucial to understanding many biological processes, from cell structure to vision, that rely on these forces for structure and function, and biological systems are often the inspiration for supramolecular research. The existence of intermolecular forces was first postulated by Johannes Diderik van der Waals in 1873. However, it was the Nobel laureate Hermann Emil Fischer who developed supramolecular chemistry's philosophical roots: in 1894, Fischer suggested that enzyme-substrate interactions take the form of a "lock and key", the fundamental principles of molecular recognition and host-guest chemistry. In the early twentieth century, noncovalent bonds were understood in more detail, with the hydrogen bond being described by Latimer in 1920. The use of these principles led to an increasing understanding of protein structure. The use of noncovalent bonds is essential to DNA replication because they allow the strands to be separated and used to template new double-stranded DNA. Concomitantly, chemists began to recognize and study synthetic structures based on noncovalent interactions, such as micelles and microemulsions. Eventually, chemists were able to take these concepts and apply them to synthetic systems; the breakthrough came in the 1960s with the synthesis of the crown ethers by Charles J. Pedersen. Following this work, other researchers such as Donald J. Cram became active in the field.
The development of selective host-guest complexes in particular, in which a host molecule recognizes and selectively binds a certain guest, was an important milestone. The emerging science of nanotechnology also had a strong influence on the subject, with building blocks such as fullerenes, nanoparticles, and dendrimers becoming involved in synthetic systems. Supramolecular chemistry deals with subtle interactions, and consequently control over the processes involved can require great precision. In particular, noncovalent bonds have low energies and often no activation energy for formation; as demonstrated by the Arrhenius equation, this means that, unlike in covalent bond-forming chemistry, the rate of bond formation is not increased at higher temperatures. In fact, chemical equilibrium equations show that the low bond energy results in a shift towards the breaking of supramolecular complexes at higher temperatures. However, low temperatures can also be problematic for supramolecular processes: supramolecular chemistry can require molecules to distort into thermodynamically disfavored conformations, and the dynamic nature of supramolecular chemistry is utilized in many systems, so cooling the system would slow these processes. Thus, thermodynamics is an important tool for designing, controlling and studying supramolecular systems. Perhaps the most striking example is that of warm-blooded biological systems, which entirely cease to operate outside a very narrow temperature range. The molecular environment around a supramolecular system is also of prime importance to its operation; for this reason, the choice of solvent can be critical. Molecular self-assembly is the construction of systems without guidance or management from an outside source.
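The Arrhenius equation invoked above relates the rate constant of a reaction to its activation energy; the following is its standard form, not a formula taken from this article:

```latex
% Arrhenius equation: k = rate constant, A = pre-exponential factor,
% E_a = activation energy, R = gas constant, T = absolute temperature
k = A\, e^{-E_a / RT}
```

When the activation energy is near zero, as for many noncovalent associations, the exponential factor is near 1 and k is roughly A, so raising the temperature does not appreciably speed up bond formation; it instead shifts the equilibrium toward dissociation of the weakly bound complex.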

43.
Environmental chemistry
–
Environmental chemistry is the scientific study of the chemical and biochemical phenomena that occur in natural places. It should not be confused with green chemistry, which seeks to reduce pollution at its source. Environmental chemistry is the study of chemical processes occurring in the environment which are impacted by humankind's activities; the focus is upon developing an understanding of the nature of these chemical processes. Environmental chemistry involves first understanding how the uncontaminated environment works: which chemicals, in what concentrations, are present naturally. Without this it would be impossible to accurately study the effects humans have on the environment through the release of chemicals. Environmental chemists draw on a range of concepts from chemistry and various environmental sciences to assist in their study of what is happening to a chemical species in the environment. Important general concepts from chemistry include understanding chemical reactions and equations, solutions, units and sampling. A contaminant is a substance present in nature at a level higher than fixed levels or that would not otherwise be there; this may be due to human activity and bioactivity. The term contaminant is often used interchangeably with pollutant, which is a substance that has a detrimental impact on the surrounding environment. Chemical measures of water quality include dissolved oxygen, chemical oxygen demand, biochemical oxygen demand, total dissolved solids, pH, nutrients, heavy metals and soil chemicals. Examples of environmental issues include heavy metal contamination of land by industry, where the metals can then be transported into water bodies and taken up by living organisms, and nutrients leaching from agricultural land into water courses, which can lead to algal blooms and eutrophication.
Urban runoff of pollutants washing off impervious surfaces during rain storms is another concern; typical pollutants include gasoline, motor oil and other hydrocarbon compounds, metals, nutrients and sediment. Quantitative chemical analysis is a key part of environmental chemistry, since it provides the data that frame most environmental studies. Common analytical techniques used for determinations in environmental chemistry include classical wet chemistry methods, such as gravimetric and titrimetric analysis. More sophisticated approaches are used in the determination of trace metals; organic compounds are also commonly measured using mass spectrometric methods, such as gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-mass spectrometry (LC-MS). Tandem mass spectrometry (MS/MS) and high-resolution/accurate-mass spectrometry (HR/AM) offer sub-part-per-trillion detection. Non-MS methods using GCs and LCs with universal or specific detectors are still staples in the arsenal of available analytical tools. Other parameters often measured in environmental chemistry are radiochemicals; these are pollutants which emit radioactive materials, such as alpha and beta particles, posing danger to human health and the environment. Particle counters and scintillation counters are most commonly used for these measurements. Bioassays and immunoassays are utilized for toxicity evaluations of chemical effects on various organisms.

44.
Green chemistry
–
Green chemistry overlaps with all subdisciplines of chemistry, but with a particular focus on chemical synthesis, process chemistry, and chemical engineering in industrial applications. To a lesser extent, the principles of green chemistry also affect laboratory practices. The overarching goals of green chemistry are more resource-efficient and inherently safer design of molecules, materials, products and processes. The set of concepts now recognized as green chemistry coalesced in the mid- to late 1990s, along with broader adoption of the term. In 1998, Paul Anastas and John C. Warner published a set of principles to guide the practice of green chemistry. The twelve principles of green chemistry are: It is better to prevent waste than to treat or clean up waste after it is formed. Synthetic methods should be designed to maximize the incorporation of all materials used in the process into the final product. Wherever practicable, synthetic methodologies should be designed to use and generate substances that possess little or no toxicity to human health and the environment. Chemical products should be designed to preserve efficacy of function while reducing toxicity. The use of auxiliary substances should be made unnecessary wherever possible. Energy requirements should be recognized for their environmental and economic impacts, and synthetic methods should be conducted at ambient temperature and pressure. A raw material or feedstock should be renewable rather than depleting wherever technically and economically practicable. Unnecessary derivatization should be avoided whenever possible. Catalytic reagents are superior to stoichiometric reagents. Chemical products should be designed so that at the end of their function they do not persist in the environment. Analytical methodologies need to be further developed to allow for real-time, in-process monitoring and control prior to the formation of hazardous substances.
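The second principle above, maximizing the incorporation of all materials used in the process into the final product, is commonly quantified by the atom economy metric; the following is the standard definition, not a formula taken from this article:

```latex
% Atom economy: the fraction of reactant mass that ends up in the desired product
\text{atom economy} =
\frac{M_{\text{product}}}{\displaystyle\sum_i M_{\text{reactant},\,i}} \times 100\%
```

Here M denotes molecular weight; a reaction whose by-products are small (such as water) has high atom economy, while one that discards a large leaving group does not.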
Substances, and the form of a substance, used in a chemical process should be chosen to minimize the potential for chemical accidents, including releases, explosions and fires. Green chemistry is increasingly seen as a tool that researchers must use to evaluate the environmental impact of nanotechnology. Solvents are consumed in large quantities in many chemical syntheses as well as for cleaning and degreasing. Traditional solvents are often toxic or are chlorinated; green solvents, on the other hand, are generally derived from renewable resources and biodegrade to innocuous, often naturally occurring products. Novel or enhanced synthetic techniques can provide improved environmental performance or enable better adherence to the principles of green chemistry. Some further examples of applied green chemistry are supercritical water oxidation and on-water reactions. Bioengineering is also seen as a promising technique for achieving green chemistry goals.

Quantum mechanics is the science of the very small. It explains the behavior of matter and its interactions with energy …

Hot metalwork. The yellow-orange glow is the visible part of the thermal radiation emitted due to the high temperature. Everything else in the picture is glowing with thermal radiation as well, but less brightly and at longer wavelengths than the human eye can detect. A far-infrared camera can observe this radiation.

The diffraction pattern produced when light is shone through one slit (top) and the interference pattern produced by two slits (bottom). The much more complex pattern from two slits, with its small-scale interference fringes, demonstrates the wave-like propagation of light.

During chemical reactions, bonds between atoms break and form, resulting in different substances with different properties. In a blast furnace, iron oxide, a compound, reacts with carbon monoxide to form iron, one of the chemical elements, and carbon dioxide.

Fig. 1: Probability densities corresponding to the wave functions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Brighter areas correspond to higher probability density in a position measurement. Such wave functions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics, and are modes of oscillation as well, possessing a sharp energy and, thus, a definite frequency. The angular momentum and energy are quantized, and take only discrete values like those shown (as is the case for resonant frequencies in acoustics)

In physics, special relativity (SR, also known as the special theory of relativity or STR) is the generally accepted …

Figure 1-14. Galaxy M87 streams out a black-hole-powered jet of electrons and other sub-atomic particles traveling at nearly the speed of light.

The primed system is in motion relative to the unprimed system with constant velocity v only along the x-axis, from the perspective of an observer stationary in the unprimed system. By the principle of relativity, an observer stationary in the primed system will view a likewise construction except that the velocity they record will be −v. The changing of the speed of propagation of interaction from infinite in non-relativistic mechanics to a finite value will require a modification of the transformation equations mapping events in one frame to another.

Event B is simultaneous with A in the green reference frame, but it occurs before A in the blue frame, and occurs after A in the red frame.

Modern physics is the post-Newtonian conception of physics. It implies that classical descriptions of phenomena are …

Classical physics is usually concerned with everyday conditions: speeds much lower than the speed of light, and sizes much greater than that of atoms. Modern physics is usually concerned with high velocities and small distances.

Thermodynamics is a branch of physics concerned with heat and temperature and their relation to energy and work. The …

Annotated color version of the original 1824 Carnot heat engine showing the hot body (boiler), working body (system, steam), and cold body (water), the letters labeled according to the stopping points in Carnot cycle.

Information theory studies the quantification, storage, and communication of information. It was originally proposed by …

A picture showing scratches on the readable surface of a CD-R. Music and data CDs are coded using error-correcting codes, and thus can still be read even if they have minor scratches, thanks to error detection and correction.