Saturday, September 25, 2010

Michael Hirsh recently ("Our Best Minds Are Failing Us", Newsweek, September 16, 2010) lamented how unwilling mainstream economists are to change their thinking in light of the events of the last few years. Brian Milner, a columnist for The Globe and Mail, has added to the chorus of complaints.

Perhaps an opportunity now exists for a new introductory textbook in economics that differs fairly comprehensively from mainstream textbooks. Never mind Colander’s approach of modifying his textbooks at most 15% from the previous editions so that mainstream economists will not reject it. Maybe some enterprising heterodox economist should write an uncompromising introduction to economics that is also up-to-date on current events. Years ago, I listed some textbooks. More textbooks have become available since then, for example, G. C. Harcourt’s The Structure of Post-Keynesian Economics: The Core Contributions of the Pioneers. I am not sure that Luigi L. Pasinetti’s Keynes and the Cambridge Keynesians: A ‘Revolution in Economics’ to be Accomplished counts as a textbook. I’m sure I’m leaving much out.

But I’m not sure the packaging on most of these is what I’d like to see tried. The textbook I have in mind should be fairly thick, have various boxed asides, and have problem sets after every chapter. (The problem sets could include essay questions and have fewer numerical examples than is common.)

1) Heat vegetable or olive oil over medium-high heat in a wide, deep pot (e.g., a Dutch oven). Cut sausage in half lengthwise, then in half lengthwise again. Dice into small pieces and add to pot.

2) Peel and dice onion, adding it to the pot as you do. Dice celery (including leaves) and pepper, adding them to the pot. (Based on what I've seen, one could also add a diced carrot.)

3) When all vegetables are added, cook about 5 minutes so they soften a little. Add bay leaf, thyme, turmeric (if desired), chicken (e.g., whole thighs), and rice. Add broth plus water to equal 4 cups. Season with a little salt and pepper.

4) Bring to a boil, cover, reduce heat to simmer and cook 30 or 40 minutes, or until chicken is cooked and rice is tender. (I'm thinking of trying it with frozen peas added with about 20 minutes left.)

Sunday, September 12, 2010

1.0 Introduction

Cosma Shalizi says, "It is not true that ergodicity is incompatible with sensitive dependence on initial conditions." This poses some questions for me: Can I give an example of a non-ergodic process that also exhibits sensitive dependence on initial conditions? Can I give an example of an ergodic process that exhibits sensitive dependence on initial conditions?

This post answers the former question. I consider Newton's method for finding roots of unity in the complex plane. The latter question is probably more important for Cosma's assertion. For now, I cite the Lorenz equations as an example of an ergodic process with the desired sensitive dependence.

Cosma's assertion, "It is not true that non-stationarity is a sufficient condition for non-ergodicity," directly contradicts Paul Davidson. I do not address that contradiction here.

2.0 Newton's Method

Newton's method is an algorithm for finding the zeros of a function. In this post, I illustrate the method with the function:

F(z) = z^3 - 1,

where z is a complex number. A complex number can be written as a two-element vector:

z = [x, y]^T = x + jy

where j is the square root of negative one. (I've been hanging around electrical engineers.) Likewise, one can consider the function F as a vector of two elements:

F(z) = [f1(z), f2(z)]^T

The first component maps the real and imaginary parts of the argument to the real part of the function value:

f1(z) = x^3 - 3xy^2 - 1

The second component maps to the imaginary component of the function value:

f2(z) = y (3x^2 - y^2)

Newton's method is for numerically finding a solution to the following equation:

F(z) = 0

In my case, one is searching for the cube roots of unity. The method is iterative: an initial guess is refined until successive guesses are close enough together that one is willing to accept that the method has converged. Each guess is refined by taking a linear approximation to the function at that guess and solving for the zero of the linear approximation; that zero is the next iterate.

The derivative of a function, when evaluated at the current iterate, provides the linear approximation. The Jacobian is the two-dimensional equivalent of the derivative. The Jacobian, J, is a matrix with the following elements:

Ji,1([x, y]^T) = ∂fi([x, y]^T)/∂x, i = 1, 2.

Ji,2([x, y]^T) = ∂fi([x, y]^T)/∂y, i = 1, 2.

Newton's method is specified by the following iterative equation:

z_(n+1) = z_n - J^(-1)(z_n) * F(z_n)
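The iteration can be sketched in a few lines. Since F is analytic, the Jacobian step reduces to complex division, so the update is equivalent to z - (z^3 - 1)/(3z^2). A minimal sketch (the function name, tolerance, and iteration cap are my own choices):

```python
def newton_cube_root(z, tol=1e-12, max_iter=100):
    """Run Newton's method for F(z) = z^3 - 1 from the initial guess z."""
    for _ in range(max_iter):
        dz = (z**3 - 1) / (3 * z**2)  # F(z) / F'(z)
        z = z - dz
        if abs(dz) < tol:             # successive guesses close enough
            break
    return z

# A guess near the real root converges to 1.
print(newton_cube_root(0.8 + 0.1j))
```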

3.0 Numeric Explorations

Figure 1 shows a coloring of the plane based on the application of Newton's method. Each point in the plane can be selected as an initial point. The method is applied, and the point is colored according to which of the three cube roots of unity the method converges to. Figure 2 shows an enlargement of the region around the indicated point in the northeast of Figure 1. Notice the fractal nature of the regions of convergence.

Figure 1: Fractal Basins of Attraction for Newton's Method

Figure 2: An Enlargement of These Fractal Basins
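Such a coloring can be reproduced with a sketch like the following, which runs the iteration from each grid point and records the nearest cube root of unity. The grid resolution and iteration cap are illustrative choices; a plotting library would map the three indices to colors.

```python
import cmath

# The three cube roots of unity: 1 and (-1 ± j*sqrt(3))/2.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin_index(z, max_iter=100):
    """Return the index of the root Newton's method approaches from z."""
    for _ in range(max_iter):
        if z == 0:
            return -1  # derivative vanishes; the iteration is undefined
        z = z - (z**3 - 1) / (3 * z**2)
    # Color by the nearest root.
    return min(range(3), key=lambda k: abs(z - ROOTS[k]))

# A coarse grid over [-2, 2] x [-2, 2].
grid = [[basin_index(complex(x, y) / 10) for x in range(-20, 21)]
        for y in range(-20, 21)]
```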

To exhibit sensitive dependence on initial conditions, I wanted to find nearby points whose trajectories diverge under this dynamical process. Table 1 lists six points selected from Figure 2. They fall into three groups, depending on which root they converge to. I claim that one can find at least three distinct points, each pair as close together as one wants, such that each converges to a separate root.

Table 1: Limit Points for Newton's Method

Initial Guess        | Limit Point
0.3899 + j 0.6871    | 1
0.3938 + j 0.6780    | (-1/2) - j (3^(1/2))/2
0.3986 + j 0.6811    | (-1/2) + j (3^(1/2))/2
0.4010 + j 0.6868    | 1
0.3980 + j 0.6908    | (-1/2) - j (3^(1/2))/2
0.3943 + j 0.6949    | (-1/2) + j (3^(1/2))/2

Figures 3 and 4 display the trajectories of the six points selected for Table 1. Apparently the function is very shallow in this region. I had not realized before these explorations that these sorts of trajectories go so far from the origin before returning to converge to a root on the unit circle.

Figure 3: Real Part of Some Time Series From Newton's Method

Figure 4: Imaginary Part of Some Time Series From Newton's Method

4.0 Conclusions

Cosma provides this definition, among others, of an ergodic process:

"A ... process is ergodic when ... (almost) all trajectories generated by an ergodic process belong to a single invariant set, and they all wander from every part of that set to every other part..."

This definition is appropriate for both deterministic and stochastic processes.

The three roots of unity constitute the non-wandering (invariant) set for the dynamical system created by the above application of Newton's method. A trajectory that has converged to one of the roots does not wander to any other root. So the process is non-ergodic. Yet which root a process converges to is crucially dependent on the initial conditions. A small variation in the initial conditions leads to a long-term divergence in trajectories. This is especially evident because of the fractal structure of the basins of attraction of the three roots.

I think of the above as close to recreational mathematics. I have not tied the above example into any economics model. Common neoclassical models, such as the Arrow-Debreu model of general equilibrium, fail to tie dynamics down. I find it difficult to see how one who has absorbed this fact and understands the mathematics of dynamical systems can find credible much orthodox teaching in economics.

Sunday, September 05, 2010

"Two souls, alas, do dwell within this breast. The one is ever parting from the other" -- Goethe

"He [i.e., Dickens] told me that all the good simple people in his novels, Little Nell, even the holy simpletons like Barnaby Rudge [Slater comments parenthetically that this must have been Dostoevsky's description, not Dickens' -- indeed] are what he wanted to have been, and his villains were what he was (or rather, what he found in himself), his cruelty, his attacks of causeless enmity towards those who were helpless and looked to him for comfort, his shrinking from those whom he ought to love, being used up in what he wrote. There were two people in him, he told me: one who feels as he ought to feel and one who feels the opposite. From the one who feels the opposite I make my evil characters, from the one who feels as a man ought to feel I try to live my life. Only two people? I asked." -- Fyodor Dostoevsky

I have previously described agents that assess an action by ranking outcomes along a number of incommensurable dimensions. By Arrow's impossibility theorem, such an agent in general cannot have a single aggregate ranking of the outcomes of actions.

I was able to list all best choices for my simple example. That is, for each menu, I listed the best choices, with ties being possible. (By the way, a budget constraint is a menu.) To generalize this approach, one would need methods for determining best choices when listing all possible menus by hand becomes impractical. Pairwise voting is not a good idea, since the results depend on the order in which pairs are compared. Furthermore, one would not want to specify just one such method, but allow for many different possibilities.
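The order-dependence of pairwise voting is easy to exhibit with a Condorcet cycle. In the hypothetical sketch below, an agent assesses three options along three ranked dimensions, with each pairwise contest decided by majority rule over the dimensions; the option names and rankings are invented for the example.

```python
# Three rankings (best to worst), one per dimension of assessment.
rankings = [["A", "B", "C"],   # dimension 1: A > B > C
            ["B", "C", "A"],   # dimension 2: B > C > A
            ["C", "A", "B"]]   # dimension 3: C > A > B

def beats(x, y):
    """True if a majority of dimensions rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

def agenda_winner(order):
    """Compare the first two options, then pit each later option
    against the winner so far (a sequential pairwise agenda)."""
    winner = order[0]
    for challenger in order[1:]:
        winner = winner if beats(winner, challenger) else challenger
    return winner

print(agenda_winner(["A", "B", "C"]))  # A beats B, then C beats A -> C
print(agenda_winner(["B", "C", "A"]))  # B beats C, then A beats B -> A
```

Since A beats B, B beats C, and C beats A in pairwise contests, whichever option is introduced last wins, so the "aggregate choice" is an artifact of the agenda.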

Ulrich Krause has done this. He calls the method for choosing out of these rankings of different aspects an agent's "character". As I understand it, he allows for these rankings to change, based on the agent's experience. And so he ends up with a formal model of opinion dynamics.

I don't know if or how this relates to Akerlof's identity dynamics, but I think that would be an interesting question to explore.