Econophysics was started in the mid-1990s by several physicists working in the subfield of statistical mechanics. Unsatisfied with the traditional explanations and approaches of economists – which usually prioritized simplified approaches for the sake of soluble theoretical models over agreement with empirical data – they applied tools and methods from physics, first to try to match financial data sets, and then to explain more general economic phenomena.

One driving force behind econophysics arising at this time was the sudden availability of large amounts of financial data, starting in the 1980s. It became apparent that traditional methods of analysis were insufficient – standard economic methods dealt with homogeneous agents and equilibrium, while many of the more interesting phenomena in financial markets fundamentally depended on heterogeneous agents and far-from-equilibrium situations.

The term "econophysics" was coined by H. Eugene Stanley, to describe the large number of papers written by physicists on the problems of stock and other markets, at a conference on statistical physics held in Kolkata (formerly Calcutta) in 1995; it first appeared in the conference's proceedings publication in Physica A in 1996.[2][3] The inaugural meeting on econophysics was organised in 1998 in Budapest by János Kertész and Imre Kondor.

Recurring meeting series on the topic include APFA, ECONOPHYS-KOLKATA,[4] the Econophysics Colloquium, and ESHIA/WEHIA.

If "econophysics" is taken to denote the principle of applying statistical mechanics to economic analysis, as opposed to a particular literature or network, priority of innovation is probably due to Emmanuel Farjoun and Moshé Machover (1983). Their book Laws of Chaos: A Probabilistic Approach to Political Economy proposes dissolving (their words) the transformation problem in Marx's political economy by re-conceptualising the relevant quantities as random variables.[5]

For potential games, it has been shown that an emergence-producing equilibrium based on information via Shannon information entropy produces the same equilibrium measure (the Gibbs measure from statistical mechanics) as a stochastic dynamical equation, both of which are based on bounded rationality models used by economists. The fluctuation-dissipation theorem connects the two, establishing a concrete correspondence of "temperature", "entropy", "free potential/energy", and other physics notions to an economic system. The statistical mechanics model is not constructed a priori: it results from a bounded-rationality assumption imposed on existing neoclassical models. It has been used to prove the "inevitability of collusion" result of Huw Dixon in a case for which the neoclassical version of the model does not predict collusion.[11] Here demand is increasing, as with Veblen goods, or with stock buyers subject to the "hot hand" fallacy who prefer to buy more successful stocks and sell those that are less successful.[12]
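As a minimal illustration of the bounded-rationality/Gibbs-measure correspondence described above, the logit (quantal-response) choice rule assigns action probabilities proportional to exp(β·payoff) — formally a Gibbs measure with β playing the role of inverse temperature. This is a generic sketch, not the specific model of the cited papers; the function name and payoffs are illustrative:

```python
import math

def gibbs_choice_probabilities(payoffs, beta):
    """Logit (quantal-response) choice rule: action probabilities form a
    Gibbs measure over payoffs, with beta as inverse temperature.
    beta -> 0 gives uniform (fully random) choice; beta -> infinity
    recovers strict best response."""
    weights = [math.exp(beta * u) for u in payoffs]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

# Illustrative example: three actions with payoffs 1, 2, 3
probs = gibbs_choice_probabilities([1.0, 2.0, 3.0], beta=1.0)
uniform = gibbs_choice_probabilities([1.0, 2.0, 3.0], beta=0.0)
```

At finite β the agent mostly, but not always, picks the best action — the "bounded" part of bounded rationality.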

Quantifiers derived from information theory were used in several papers by econophysicist Aurelio F. Bariviera and coauthors to assess the degree of informational efficiency of stock markets. In a paper published in Physica A,[13] Zunino et al. use a statistical tool that was novel in the financial literature: the complexity–entropy causality plane. This Cartesian representation establishes an efficiency ranking of different markets and distinguishes different bond market dynamics. Moreover, the authors conclude that the classification derived from the complexity–entropy causality plane is consistent with the qualifications assigned by major rating companies to the sovereign instruments. A similar study by Bariviera et al.[14] explores the relationship between credit ratings and informational efficiency of a sample of corporate bonds of US oil and energy companies, also using the complexity–entropy causality plane. They find that this classification agrees with the credit ratings assigned by Moody's.
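The horizontal axis of the complexity–entropy causality plane is the normalized Bandt–Pompe (permutation) entropy of a time series, which counts ordinal patterns in sliding windows. A minimal sketch of that ingredient, with a hypothetical function name and an illustrative series (not data from the cited studies):

```python
import math

def permutation_entropy(series, order=3):
    """Normalized Bandt-Pompe permutation entropy: counts ordinal
    patterns (rank orderings) of length `order` in sliding windows,
    then returns the Shannon entropy of the pattern distribution,
    normalized to [0, 1] by log(order!)."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Ordinal pattern: indices of the window sorted by value
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

# A strictly monotone series has a single ordinal pattern (entropy 0);
# a more irregular series yields intermediate entropy.
pe_trend = permutation_entropy(list(range(50)))
pe_mixed = permutation_entropy([4, 7, 9, 10, 6, 11, 3])
```

Values near 1 (maximal entropy, minimal structure) indicate more informationally efficient dynamics in this framework; the plane's second coordinate, statistical complexity, is omitted here for brevity.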

Another good example is random matrix theory, which can be used to identify the noise in financial correlation matrices. One paper has argued that this technique can improve the performance of portfolios, e.g., when applied in portfolio optimization.[15]
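A common random-matrix recipe, sketched below under simplifying assumptions (T observations of N i.i.d. unit-variance returns), treats eigenvalues of the empirical correlation matrix that fall below the Marchenko–Pastur upper edge as noise and clips them. The function name and the specific clipping choice are illustrative, not the method of the cited paper:

```python
import numpy as np

def clip_correlation_eigenvalues(returns):
    """Denoise an empirical correlation matrix by eigenvalue clipping:
    eigenvalues below the Marchenko-Pastur upper edge (1 + sqrt(N/T))^2
    are treated as noise and replaced by their average, preserving the
    trace. `returns` has shape (T, N): T observations of N assets."""
    T, N = returns.shape
    lam_max = (1.0 + np.sqrt(N / T)) ** 2  # MP upper edge, unit variance
    corr = np.corrcoef(returns, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)     # ascending eigenvalues
    noise = vals < lam_max
    if noise.any():
        vals[noise] = vals[noise].mean()  # trace-preserving replacement
    cleaned = vecs @ np.diag(vals) @ vecs.T
    # Rescale so the diagonal is exactly 1 (a valid correlation matrix)
    d = np.sqrt(np.diag(cleaned))
    return cleaned / np.outer(d, d)

# Illustrative run on pure-noise returns
rng = np.random.default_rng(0)
cleaned = clip_correlation_eigenvalues(rng.standard_normal((500, 20)))
```

Eigenvalues above the edge (e.g., the "market mode") are kept intact; only the noise band is flattened, which is what stabilizes downstream portfolio weights.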

There are also analogies between finance theory and diffusion theory. For instance, the Black–Scholes equation for option pricing is a diffusion-advection equation (see, however, [19][20] for a critique of the Black–Scholes methodology). The Black–Scholes theory can be extended to provide an analytical theory of main factors in economic activities.[17]
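For concreteness, the closed-form solution of that diffusion-advection equation for a European call option is the standard Black–Scholes formula, sketched below (the formula is standard; the function names are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K,
    time to maturity T (years), risk-free rate r, volatility sigma.
    This is the closed-form solution of the diffusion-advection PDE
    after the change of variables reducing it to the heat equation."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Illustrative at-the-money example
price = black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```

The diffusion term corresponds to the σ²-part of the PDE and the advection term to the risk-free drift; the critiques cited above concern precisely the Gaussian diffusion assumption underlying this solution.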

Papers on econophysics have been published primarily in journals devoted to physics and statistical mechanics, rather than in leading economics journals. Mainstream economists have generally been unimpressed by this work.[21] Some economists, including Mauro Gallegati, Steve Keen, Paul Ormerod, and Alan Kirman have shown more interest, but also criticized some trends in econophysics.

In contrast, econophysics is having some impact on the more applied field of quantitative finance, whose scope and aims significantly differ from those of economic theory. Various econophysicists have introduced models for price fluctuations in financial markets or original points of view on established models.[19][22][23] Also several scaling laws have been found in various economic data.[24][25][26]

Presently, one of the main results of econophysics is the explanation of the "fat tails" in the distribution of many kinds of financial data as a universal self-similar scaling property (i.e., scale invariant over many orders of magnitude in the data),[27] arising from the tendency of individual market competitors, or of aggregates of them, to exploit systematically and optimally the prevailing "microtrends" (e.g., rising or falling prices). These fat tails are mathematically important because they comprise the risks: risks that may be very small, and thus tempting to neglect, but that are not negligible at all, since they can never be made exponentially tiny. Instead they follow a measurable, algebraically decreasing power law, for example with a failure probability of only P ∝ x⁻⁴, where x is an increasingly large variable in the tail region of the distribution considered (i.e., a price statistic with far more than 10⁸ data points). In other words, the events considered are not simply "outliers" but must really be taken into account and cannot be "insured away".[28] It also appears relevant that near a change of tendency (e.g., from falling to rising prices) there are typical "panic reactions" of the selling or buying agents, with algebraically increasing bargain rapidities and volumes.[28] Fat tails are also observed in commodity markets.
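The exponent of such a power-law tail can be estimated directly from data, for example with the classical Hill estimator on the largest order statistics. The sketch below, with hypothetical function names, recovers an exponent near 4 from synthetic Pareto-tailed samples (synthetic data, not market data):

```python
import math
import random

def hill_estimator(samples, k):
    """Hill estimator of the tail exponent alpha: for a distribution
    with P(X > x) ~ x^(-alpha), the mean log-ratio of the k largest
    observations to the (k+1)-th largest converges to 1/alpha."""
    top = sorted(samples, reverse=True)[:k + 1]
    log_ratios = [math.log(top[i] / top[k]) for i in range(k)]
    return k / sum(log_ratios)

# Synthetic Pareto-tailed samples with true exponent alpha = 4,
# via inverse-transform sampling: X = U^(-1/4)
random.seed(0)
samples = [random.random() ** -0.25 for _ in range(20000)]
alpha_hat = hill_estimator(samples, k=1000)
```

An estimated exponent around 4 matches the P ∝ x⁻⁴ tail quoted above; a Gaussian, by contrast, would drive the estimate upward without bound as the threshold grows, since its tail is not algebraic.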

As in quantum field theory, the fat tails can be obtained by complicated "nonperturbative" methods, mainly numerical ones, since they contain the deviations from the usual Gaussian approximations, e.g., the Black–Scholes theory. Fat tails can, however, also be due to other phenomena, such as a random number of terms in the central limit theorem, or any number of other non-econophysics models. Due to the difficulty in testing such models, they have received less attention in traditional economic analysis.

^ Farjoun and Machover disclaim complete originality: their book is dedicated to the late Robert H. Langston, whom they cite for direct inspiration (page 12), and they also note an independent suggestion in a discussion paper by E. T. Jaynes (page 239).