Adoption of models of intertemporal and temporary general equilibrium.

Assertion that such General Equilibrium models are not meant to be descriptive and, besides, have their own problems of stability, uniqueness, and determinateness, with no need for Cambridge critiques.

Samuel Hollander's argument for more continuity between classical and neoclassical economics than Sraffians see.

I think I am still ignoring large aspects of the vast literature on the CCC. This post was inspired by Noah Smith's
anti-intellectualism.
Barkley Rosser brings up the CCC in his response to Smith.
I could list references for each point above. I am not sure I could even find a survey article that covered all those points,
maybe not even a single book.

So the CCC provides, to me, a convincing counter-example to Smith's argument. In the comments to his post,
Robert Waldmann brings up
old, paleo-Keynesian economics as an interesting rebuttal to a specific point.

Thursday, May 25, 2017

Anthony Giddens, in The Third Way: The Renewal of Social Democracy (1999), advocates a renewed social democracy. He contrasts what he is advocating with neoliberalism,
which he summarizes as, basically, Margaret Thatcher's approach. Giddens recognizes that more flexible labor
markets will not bring full employment and argues that unregulated globalism, including unregulated international
financial markets, is a danger that must be addressed. He stresses the importance of environmental issues, on all
levels from the personal to the international. I wish he had something to say about labor unions, which I
thought had an institutionalized role in the Labour Party before Blair and Brown's "New Labour" movement.

Charles Peters published A Neo-Liberal's Manifesto in 1982. (See also his 1983 piece in the Washington Monthly.) This was directed
to the Democratic Party in the USA. It argues that they should reject the approach of the New Deal and the Great
Society. Rather, they should put greater reliance on market solutions for progressive ends. I do not think Peters
was aware that the term "neoliberalism" was already taken. Contrasting and comparing other uses with Peters' could
occupy much time.

Anyways, neoliberalism is something more specific than any centrist political philosophy, between socialist central
planning and reactionary ethnic nationalism. George Monbiot has some short,
popular accounts.
Read Noah Smith
if you want confusion, incoherence, and ignorance, including ignorance of the literature.

This post has nothing to do with economics, although it does illustrate emergent behavior. And I have
figures that are an eye test. I am subjectively original. But I assume somebody else has done this -
that I am not objectively original.

This post is an exercise in combinatorics. There are 131,328 life-like Cellular Automata (CA), up to
symmetry.

The GoL is "played", if you can call it that, on an infinite plane divided into equally sized
squares. The plane looks something like a chess board, extended forever. See the left side of
Figure 1, above. Every square, at any moment in time, is in one of two states: alive or dead.
Time is discrete. The rules of the game specify the state of each square at any moment in time,
given the configuration at the previous instant.

The state of a square does not depend solely on its previous state. It also depends on the states
of its neighbors. Two types of neighborhoods have been defined for a CA with a grid of square cells.
The Von Neumann neighbors of a cell are the four cells above it, below it, and to the left and
right. The Moore neighborhood (Figure 2) consists of the Von Neumann neighbors and the four
cells diagonally adjacent to a given cell.

Figure 2: Moore Neighborhood of a Dead Cell

The GoL is defined for Moore neighborhoods. State transition rules can be defined in
terms of two cases:

Dead cells: By default, a dead cell stays dead. If a cell was dead at
the previous moment, it becomes (re)born at the next instant if the number of live
cells in its Moore neighborhood at the previous moment was
x1 or x2 or ... or xn.

Alive Cells: By default, a live cell becomes dead. If a cell was
alive at the previous moment, it remains alive if the number of live cells in
its Moore neighborhood at the previous moment was
y1 or y2 or ... or ym.

The state transition rules for the GoL can be specified by the notation
Bx/Sy. Let x be the concatenation of
the numbers x1, x2, ..., xn.
Let y be the concatenation of
y1, y2, ..., ym.
The GoL is B3/S23. In other words, if exactly three of the neighbors of a dead
cell are alive, it becomes alive for the next time step. If exactly two or
three of the neighbors of a live cell are alive, it remains alive at the next
time step. Otherwise, a dead cell remains dead, and a live cell becomes dead.
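To make the Bx/Sy notation concrete, here is a minimal sketch, in Java, of one time step of B3/S23 on a finite grid. The class and method names are mine, and the infinite plane is truncated so that cells beyond the array edges count as dead.

```java
// A minimal sketch of one time step of B3/S23 (the Game of Life) on a finite boolean grid.
// Cells outside the array are treated as dead; this truncates the infinite plane.
public final class LifeStep {
    public static boolean[][] step(boolean[][] grid) {
        int rows = grid.length;
        int cols = grid[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Count live cells in the Moore neighborhood of (r, c).
                int liveNeighbors = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        int nr = r + dr;
                        int nc = c + dc;
                        if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc]) {
                            liveNeighbors++;
                        }
                    }
                }
                if (grid[r][c]) {
                    // Survival rule S23: a live cell stays alive with two or three live neighbors.
                    next[r][c] = (liveNeighbors == 2 || liveNeighbors == 3);
                } else {
                    // Birth rule B3: a dead cell becomes alive with exactly three live neighbors.
                    next[r][c] = (liveNeighbors == 3);
                }
            }
        }
        return next;
    }
}
```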

The GoL is an example of
recreational mathematics.
Starting with random patterns, one can predict, roughly, the distributions of certain
patterns when the CA settles down, in some sense. On the other hand, the specific
patterns that emerge can only be found by iterating through the GoL, step
by step. And one can engineer certain patterns.

3.0 Life-Like Cellular Automata

For the purposes of this post, a life-like CA is a CA defined with:

A two dimensional grid with square cells and discrete time

Two states for each cell

State transition rules specified for Moore neighborhoods

State transition rules that can be specified by the Bx/Sy notation.

How many life-like CA are there? This is the question that this post attempts
to answer.

The Moore neighborhood of a cell contains eight cells. Thus, each of the nine
digits 0, 1, 2, 3, 4, 5, 6, 7, and 8 can appear in Bx.
For each digit, one has two choices: either it appears in the birth
rule or it does not. Thus, there are 2^9 birth rules.

The same logic applies to survival rules. There are
2^9 survival rules.

Each birth rule can be combined with any survival rule.
So there are:

2^9 × 2^9 = 2^18

life-like CA. But this number is too large. I am double counting, in
some sense.

4.0 Reversing Figure and Ground

Figure 1 shows, side by side, grids from the GoL and from a CA called
Flip Life. Flip Life is specified as B0123478/S01234678. Figure 3
shows a window from a computer program. In the window on the
left, the rules for the GoL are specified. The window on the
right is used to specify Flip Life.

Figure 3: Rules for Life and Flip Life

Flip Life basically renames the states in the GoL. Cells that are called
dead in the GoL are said to be alive in Flip Life. And cells that are alive
in the GoL are dead in Flip Life. In counting the number of life-like CA,
one should not count Flip Life separately from the GoL. In some sense,
they are the same CA.

More generally,
suppose Bx/Sy specifies a life-like CA, and let Bu/Sv be the
life-like CA in which figure and ground are reversed.

A digit d is in u if and only if 8 - d is not in y.

A digit d is in v if and only if 8 - d is not in x.

So, for any life-like CA, one can find another, symmetrical CA in which dead cells become
alive and vice versa.
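As a check on this characterization, here is a small sketch (my own illustration, not taken from any particular program) that computes the figure-and-ground reversal of a rule, with a rule represented as two boolean arrays indexed by neighbor count. Applied to B3/S23, it yields B0123478/S01234678, as claimed above.

```java
// Sketch: compute the figure-and-ground reversal Bu/Sv of a rule Bx/Sy.
// birth[d] is true if digit d appears in Bx; survival[d] is true if d appears in Sy.
public final class RuleComplement {
    public static boolean[][] complement(boolean[] birth, boolean[] survival) {
        boolean[] u = new boolean[9];
        boolean[] v = new boolean[9];
        for (int d = 0; d <= 8; d++) {
            u[d] = !survival[8 - d];  // d is in u if and only if 8 - d is not in y
            v[d] = !birth[8 - d];     // d is in v if and only if 8 - d is not in x
        }
        return new boolean[][] { u, v };
    }
}
```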

5.0 Self-Symmetrical CAs

One cannot just divide 2^18 by two to find the number of life-like CA, up
to symmetry. Some rules define CA that are the same CA, when one reverses figure
and ground. As an example, Figure 4 presents a screen snapshot for the CA called
Day and Night, specified by the rule B3678/S34678.

Figure 4: Day and Night: An Example of a Self-Symmetrical Cellular Automaton

Given rules for births, one can figure out what the rules for survival must be for the CA to
be self-symmetrical: a digit d is in the survival rule if and only if 8 - d is not in the birth rule.
Thus, there are as many self-symmetrical life-like CAs as there are rules for births.

6.0 Combinatorics

I bring all of the above together in this section. Table 1 shows a tabulation of the
number of life-like CAs, up to symmetry.

Table 1: Counting Life-Like Cellular Automata

                                               Number
  Birth Rules                                  2^9
  Survival Rules                               2^9
  Life-Like Rules                              2^9 × 2^9 = 2^18 = 262,144
  Self-Symmetric Rules                         2^9
  Non-Self-Symmetric Rules                     2^9 (2^9 - 1)
  Non-Self-Symmetric Rules, Up to Symmetry     2^8 (2^9 - 1)
  With Self-Symmetric Rules Added Back         2^8 (2^9 + 1) = 131,328
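As a sanity check on Table 1, one can simply enumerate all 2^18 rules and count equivalence classes under figure-and-ground reversal. The following sketch (my own; it encodes a rule as an 18-bit integer, with the low nine bits for the birth digits and the next nine for the survival digits) reproduces the counts of 512 self-symmetric rules and 131,328 rules up to symmetry.

```java
// Sketch: count life-like rules up to figure-and-ground reversal by brute-force enumeration.
// A rule is an 18-bit integer: bits 0-8 are the birth digits, bits 9-17 the survival digits.
public final class CountRules {
    static int complement(int rule) {
        int comp = 0;
        for (int d = 0; d <= 8; d++) {
            // d is a birth digit of the complement iff 8 - d is not a survival digit of rule.
            if (((rule >> (9 + (8 - d))) & 1) == 0) comp |= 1 << d;
            // d is a survival digit of the complement iff 8 - d is not a birth digit of rule.
            if (((rule >> (8 - d)) & 1) == 0) comp |= 1 << (9 + d);
        }
        return comp;
    }

    public static void main(String[] args) {
        int selfSymmetric = 0;
        int classes = 0;
        for (int rule = 0; rule < (1 << 18); rule++) {
            int comp = complement(rule);
            if (comp == rule) selfSymmetric++;
            if (comp >= rule) classes++;  // count each pair once, and each self-symmetric rule once
        }
        System.out.println("Self-symmetric rules: " + selfSymmetric);  // 512
        System.out.println("Rules up to symmetry: " + classes);        // 131,328
    }
}
```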

7.0 Conclusion

How many of these 131,328 life-like CA are interesting? Answering this question requires some definition of what makes
a CA interesting. It also requires some means of determining if some CA is in the set so defined. Some CAs are clearly
not interesting. For example, consider a CA in which all cells eventually die off, leaving an empty grid. Or consider
a CA in which, starting with a random grid, the grid remains random for all time, with no defined patterns ever
forming. Somewhat more interesting would be a CA in which patterns grow like a crystal, repeating and duplicating.
But perhaps an interesting definition of an interesting CA would be one that can simulate a Turing machine and
thus may compute any computable function. The GoL happens to be Turing complete.

Acknowledgements: I started with version 1.5 of Edwin Martin's implementation, in Java, of John Conway's Game of Life. I have modified this implementation in several ways.

The theory of the production of commodities by means of commodities imposes one restriction on wage-rate of profits curves:
they should be downward-sloping. They can be of any convexity. They are ratios of high-order polynomials, where the order depends
on the number of produced commodities. So no reason exists why they should not change convexity many times in the first
quadrant, where the rate of profits is positive and below the maximum rate of profits. The theory of the choice
of technique suggests that, if multiple processes are available for producing many commodities, many techniques will
contribute to part of the wage-rate of profits frontier.
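To make this a bit more explicit, here is a sketch of the algebra for a single technique, under standard assumptions (circulating capital, wages paid at the end of the production period, and a given numeraire basket d; the symbols are my notation, not taken from the post). The price system and the resulting wage curve are

$$ p = (1 + r)\, p A + w\, a_0, \qquad p\, d = 1, $$

so that

$$ w(r) = \frac{1}{a_0 \left[ I - (1 + r) A \right]^{-1} d} = \frac{\det\left[ I - (1 + r) A \right]}{a_0 \,\mathrm{adj}\!\left[ I - (1 + r) A \right] d}, $$

where p is the row vector of prices, a_0 the row vector of labor coefficients, and A the Leontief input-output matrix. Both the determinant and the adjugate term are polynomials in r whose degree grows with the number of produced commodities, which is why the wage curve can, in principle, change convexity many times.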

The empirical research does not show this. When I looked at
all countries or regions
in the world, I found very little visual deviation from straight lines for most wage curves,
for the ruling technique [1]. The exceptions tended to be undeveloped countries.
Han and Schefold, in their empirical search for capital-theoretic paradoxes in OECD countries,
also found mostly straight curves. And only a few techniques appeared on the frontier.

I have a qualitative explanation of this discrepancy between expectations from theory and empirical results.
The theory I draw on above takes technology as given. It is as if economies are analyzed based on
an instantaneous snapshot. But technology evolves as a dynamic process. The flows among industries and
final demands have been built up over decades, if not centuries.

In advanced economies, technology does not change randomly. Large corporations have Research and Development
departments, universities form extensive networks, and the government sponsors efforts to advance
Technology Readiness Levels [2].
Sponsored research is not directed randomly. Technical feasibility is an issue, although that changes
over time. Another concern is what is costly at the moment, with cost being defined broadly.
I suggest that a constant effort to reduce reliance on high-cost inputs in production processes
results, over time, in coefficients of production being lowered such that wage curves
become straighter [3].

The above story suggests that one should develop some mathematical theorems. I am aware
of two areas of research in Sraffian economics that seem promising for further inquiry
along these lines. First, consider Luigi Pasinetti's structural economic dynamics.
I have an
analysis
of hardware and software costs in computer systems, which might be suggestive.
Second, Bertram Schefold has been analyzing the relationship between the shape of wage curves;
random matrices; and eigenvalues, including eigenvalues other than the Perron-Frobenius
root.

I have been moping, during my day job, about how I cannot keep up with some of my fellow software
developers. I return to, say, Java programming after a few years, and there is a whole
new set of tools. And yet, much of what I have learned did not even exist when I received
either of my college degrees. For example, creating an Android app in Android Studio or
IntelliJ involves, minimally, XML, Java, and Virtual Machines for testing. Back in the
1980s, I saw some presentations by Marvin Zelkowitz on what might be described as an
Integrated Development Environment (IDE). He had an editor that understood Pascal
syntax, suggested statement completions, and, if I recall correctly, could be used
to set breakpoints and examine states for executing code. I do not know whether this
work fed into, for example, Eclipse.

Nowadays, you can specialize in developing web apps [4]. Some of my co-workers
are Certified Information Systems Security Professionals (CISSPs). They know a lot
of concepts that are sort of orthogonal to programming [5]. I also know
people who work at Security Operations Centers (SOCs) [6]. And there
are many other software specialities.

In short, software should no longer be considered a single industry. Glancing
quickly at the
web site
for the Bureau of Economic Analysis, I note the following industries
in the 2007 benchmark input-output tables:

Coders, programmers, and software engineers definitely provide labor inputs in
many other industries. Cybersecurity does not even appear above.

What would input-output tables have looked like, for software, in the 1970s? I speculate
you might find industries for the manufacture of computers, telecommunication
equipment, and satellites & space vehicles. And data processing would probably
be an industry.

I am thinking that new industries come about, in modern economies, more by
division and greater articulation of existing industries, not by suddenly
creating completely new products. And this can be seen in divisions
and movements in industries in National Income and Product Accounts (NIPA).
One might explore innovation over the last half-century
or so by looking at the evolution of industry taxonomies in
the NIPA [7].

4.0 Conclusion

This post suggests some research directions [8].
At this point, I do not intend to pursue either.

Footnotes

[1] Reviewers, several years ago, had three major objections to this paper. One was that I had to offer
some suggestion why wage curves should be so straight. The other two were that I needed to offer
a more comprehensive explanation of how to map from the raw data to the input-output tables
I used and that I had to account for fixed capital and depreciation.

[2] John Kenneth Galbraith's The New Industrial State is a somewhat dated analysis
of these themes.

[3] They also move outward.

[4] The web is not old. Tools like Glassfish, Tomcat, and JBoss, and their commercial
competitors are neat.

[5] Such as Confidentiality, Integrity, and Availability; two-factor authentication;
Role-Based Access Control; taxonomies for vulnerabilities and intrusions; Public
Key Infrastructure; symmetric and non-symmetric encryption; the Risk Management
Framework (RMF) for Information Assurance (IA) Certification and Accreditation;
and on and on.

[6] A SOC differs from a Network Operations Center. Operators of a SOC
have to know about host-based and network-based Intrusion Detection,
Security Incident and Event Management (SIEM) systems, Situation
Awareness, forensics, and so on.

[7] One should be aware that part of the growth in the tracking of industries
might be because computer technology has evolved. Von Neumann worried
about numerical methods for calculating matrix inverses. Much bigger
matrices are practical now.

Saturday, May 06, 2017

This post extends the results from my last
post.
I think of the results presented here as providing information about the implementation
of my simulation.
I do not claim any implications
about actually existing economies. I did not have any definite anticipations about what I
would see. I suppose it could be of interest to regenerate these results where coefficients
of production are randomly generated from some non-uniform distribution.

I continue to use a
capability to generate a random economy, where such an economy is characterized
by a single technique.
A technique is specified by a row vector of labor coefficients and a corresponding square
Leontief input-output matrix.
The labor coefficients are randomly generated from a uniform distribution
on (0.0, 1.0]. Each coefficient in the Leontief input-output matrix is randomly generated
from a uniform distribution on [0.0, 1.0). The random number generator is as provided
by the class java.util.Random, in the Java programming language. I am running Java
version 1.8.
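The following is a minimal sketch of how such a random economy might be generated. The class and field names are mine; only the use of java.util.Random and the stated distributions come from the description above.

```java
import java.util.Random;

// Sketch: generate a random one-technique economy.
// Labor coefficients are uniform on (0.0, 1.0]; Leontief coefficients are uniform on [0.0, 1.0).
public final class RandomEconomy {
    public final double[] laborCoefficients;
    public final double[][] leontiefMatrix;

    public RandomEconomy(int numberOfCommodities, Random random) {
        laborCoefficients = new double[numberOfCommodities];
        leontiefMatrix = new double[numberOfCommodities][numberOfCommodities];
        for (int i = 0; i < numberOfCommodities; i++) {
            // nextDouble() is uniform on [0.0, 1.0); subtracting it from 1.0 gives (0.0, 1.0].
            laborCoefficients[i] = 1.0 - random.nextDouble();
            for (int j = 0; j < numberOfCommodities; j++) {
                leontiefMatrix[i][j] = random.nextDouble();
            }
        }
    }
}
```

For example, new RandomEconomy(3, new Random(345657L)) would generate one three-commodity economy from the seed reported for three commodities in the table below.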

Each random economy is tested for viability. Non-viable economies are discarded.
Table 1 shows how many economies needed to be generated, given the number
of produced commodities, to end up with a sample size of 300 viable economies.
The
maximum rate of profits is calculated for each viable economy. The maximum rate of
profits occurs when the wage is zero, and the workers live on air.
Thus, labor coefficients do not matter for the calculation of the maximum rate of
profits.
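One way to see why, sketched here in my own notation (I am not claiming this is exactly how the simulation performs the calculation): with the wage at zero, the price equations reduce to an eigenvalue problem for the Leontief matrix,

$$ p = (1 + R)\, p A \;\Longrightarrow\; R = \frac{1 - \lambda_{\mathrm{PF}}(A)}{\lambda_{\mathrm{PF}}(A)}, $$

where λ_PF(A) is the dominant (Perron-Frobenius) eigenvalue of A. Viability corresponds to λ_PF(A) being less than one, which is what makes R positive, and the labor coefficients indeed play no role.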

Table 1: Number of Simulated Economies

  Seed for Random Generator    Number of Commodities    Number of Economies
  368,424,234                  2                        610
  345,657                      3                        6,124
  4,566,843                    4                        826,471
  547,527                      5                        > 2^31 - 1

I looked at the distribution of the maximum rate of profits, calculated as a percentage, in several ways. Figure 2
presents four histograms, superimposed on one another. Figure 1 expands the left tails of these histograms. I suppose
Figure 2 is somewhat easier to make sense of than Figure 1, when you click on the image. Maybe the statistics in Tables 2 and 3
are clearer. One can see, for example, in random economies in which two commodities are produced, the mean of the
maximum rate of profits is 43.9%. The minimum, in these 300 random economies, of the maximum rate of profits
is about 0.03% and the maximum is 318%. If I wanted to be more thorough, I would have to review how skewness
and kurtosis are calculated by default in the Java class
org.apache.commons.math3.stat.descriptive.DescriptiveStatistics. The coefficient of variation is the
ratio of the standard deviation to the mean. The nonparametric analogue, reported in the last row of
Table 3, is the ratio of the Inter-Quartile Range to the median.
Anyways, the distribution of the maximum rate of profits, in random viable economies generated by
the simulation, is non-Gaussian and highly skewed, with a tail extending
to the right.
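As an illustration of how such summary statistics might be produced, here is a sketch of my own; only the use of Apache Commons Math's DescriptiveStatistics class is taken from the post, and the skewness and kurtosis are whatever that class computes by default.

```java
import org.apache.commons.math3.stat.descriptive.DescriptiveStatistics;

// Sketch: summarize the maximum rates of profits (in percent) for one sample of random economies.
public final class SummarizeSample {
    public static void report(double[] maximumRatesOfProfits) {
        DescriptiveStatistics stats = new DescriptiveStatistics();
        for (double r : maximumRatesOfProfits) {
            stats.addValue(r);
        }
        double mean = stats.getMean();
        double stdDev = stats.getStandardDeviation();
        double firstQuartile = stats.getPercentile(25.0);
        double median = stats.getPercentile(50.0);
        double thirdQuartile = stats.getPercentile(75.0);
        System.out.println("Mean:          " + mean);
        System.out.println("Std. Dev.:     " + stdDev);
        System.out.println("Skewness:      " + stats.getSkewness());
        System.out.println("Kurtosis:      " + stats.getKurtosis());
        System.out.println("Coef. of Var.: " + (stdDev / mean));
        System.out.println("Minimum:       " + stats.getMin());
        System.out.println("1st Quartile:  " + firstQuartile);
        System.out.println("Median:        " + median);
        System.out.println("3rd Quartile:  " + thirdQuartile);
        System.out.println("Maximum:       " + stats.getMax());
        System.out.println("IQR/Median:    " + ((thirdQuartile - firstQuartile) / median));
    }
}
```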

Figure 2: Distribution of Maximum Rate of Profits

Table 2: Parametric Statistics

                   Number of Produced Commodities
                   Two       Three     Four      Five
  Sample Size      300       300       300       300
  Mean             43.9      15.7      8.28      4.95
  Std. Dev.        50.2      19.3      7.53      5.90
  Skewness         2.10      3.89      1.22      2.63
  Kurtosis         5.14      22.2      0.882     9.64
  Coef. of Var.    0.875     0.811     1.10      0.839

Table 3: Nonparametric Statistics

                   Number of Produced Commodities
                   Two       Three     Four      Five
  Minimum          0.0327    0.113     0.0107    0.00405
  1st Quartile     9.35      4.51      2.52      1.17
  Median           25.3      9.72      5.70      2.99
  3rd Quartile     57.3      19.9      11.3      6.27
  Maximum          318       168       36.2      44.2
  IQR/Median       1.90      1.58      1.54      1.70

With the simulation, the maximum rate of profits tends to be smaller, the more commodities
are produced. I wish I could extend these results to a lot more produced commodities.
National Income and Product Accounts (NIPAs), at the grossest level of aggregation, have
on the order of 100 produced commodities.
Even if results derived under an assumed probability
distribution for coefficients of production could be directly applied empirically,
one would like confirmation that trends seen with a very small number of produced
commodities continue.

Wednesday, May 03, 2017

I have begun working towards replicating certain simulation results reported by Stefano Zambelli.

At this point, I have
implemented a capability to generate a random economy, where such an economy is characterized
by a single technique.
A technique is specified by a row vector of labor coefficients and a corresponding square
Leontief input-output matrix.
The labor coefficients are randomly generated from a uniform distribution
on (0.0, 1.0]. Each coefficient in the Leontief input-output matrix is randomly generated
from a uniform distribution on [0.0, 1.0). The random number generator is as provided
by the class java.util.Random, in the Java programming language. I am running Java
version 1.8.

A Monte Carlo simulation, in the results reported here, tests each random economy for
viability, where the technique, for each economy, is used to produce a specified
number of commodities. A viable economy can reproduce the inputs used up in producing
the outputs. If the economy is just viable, nothing is left over to pay the workers
and the capitalists. The Hawkins-Simon condition can be used to check for viability.
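Here is a minimal sketch of such a viability test, assuming the Hawkins-Simon condition in the form that all leading principal minors of (I - A) must be positive. The class and method names are mine; this illustrates the check rather than reproducing the code actually used.

```java
// Sketch: test viability via the Hawkins-Simon condition.
// An economy with Leontief input-output matrix A is taken to be viable if all
// leading principal minors of (I - A) are positive.
public final class Viability {
    public static boolean isViable(double[][] a) {
        int n = a.length;
        // Form I - A.
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                m[i][j] = (i == j ? 1.0 : 0.0) - a[i][j];
            }
        }
        // Check each leading principal minor.
        for (int k = 1; k <= n; k++) {
            if (leadingMinor(m, k) <= 0.0) {
                return false;
            }
        }
        return true;
    }

    // Determinant of the leading k-by-k submatrix, by Gaussian elimination with partial pivoting.
    private static double leadingMinor(double[][] m, int k) {
        double[][] sub = new double[k][k];
        for (int i = 0; i < k; i++) {
            System.arraycopy(m[i], 0, sub[i], 0, k);
        }
        double det = 1.0;
        for (int col = 0; col < k; col++) {
            int pivot = col;
            for (int row = col + 1; row < k; row++) {
                if (Math.abs(sub[row][col]) > Math.abs(sub[pivot][col])) {
                    pivot = row;
                }
            }
            if (sub[pivot][col] == 0.0) {
                return 0.0;
            }
            if (pivot != col) {
                double[] tmp = sub[pivot];
                sub[pivot] = sub[col];
                sub[col] = tmp;
                det = -det;
            }
            det *= sub[col][col];
            for (int row = col + 1; row < k; row++) {
                double factor = sub[row][col] / sub[col][col];
                for (int j = col; j < k; j++) {
                    sub[row][j] -= factor * sub[col][j];
                }
            }
        }
        return det;
    }
}
```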

Table 1 reports the results. The number of Monte Carlo runs, for each row, is
1,000,000,000. The seed is reported so I can replicate my results, if I want.
I think I can provide a symmetry argument for why the probability for the
first row should be 1/2. I reran the simulation for the last row with
2,000,000,000 runs and the same seed. I still found zero viable economies.
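For what it is worth, here is one such symmetry argument, offered as my own sketch and not necessarily the one intended. For two commodities, the first leading minor condition, a_{11} < 1, holds automatically, so the Hawkins-Simon condition reduces to the single inequality

$$ (1 - a_{11})(1 - a_{22}) > a_{12}\, a_{21}. $$

Since the four coefficients are independent and uniform on [0, 1), the factors 1 - a_{11} and 1 - a_{22} have the same distribution as a_{12} and a_{21}. The two sides of the inequality are therefore independent and identically distributed products of two uniform variates, ties occur with probability zero, and the inequality holds with probability 1/2.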

Table 1: Simulation Results

  Seed for Random Generator    Number of Commodities    Number of Viable Economies    Probability
  46,576,889                   2                        499,967,476                   49.9967476%
  89,058,538                   3                        50,198,690                    5.019869%
  7,586,338                    4                        372,339                       0.0372339%
  784,054                      5                        99                            0.0000099%
  568,233,269                  6                        0                             0%

Zambelli suggests randomly specifying a rescaled output, in some sense, for the technology
so as to ensure viability. I have a rough conceptual understanding of this step, but I
need a better understanding to reduce it to source code. I think I'll go on to further analyses
before revisiting the issue of viability. The above results certainly suggest that my
analyses will be limited, in the meantime, to economies that produce only two, three, or maybe
four commodities.

I think that Zambelli's approach is worthwhile for pursuing the results in which he
is interested. One limitation arises with applying a probability distribution to one
particular description of technology. In practice, coefficients of production evolve
in a non-random manner.
Pasinetti's structural dynamics is a good way
of exploring technical progress
in the tradition of Sraffa.