Tag: Scilab

A neat way to visualize a real number $x$ is to make a sunflower out of it. This is an arrangement of points with polar angles $2\pi x n$ and polar radii $\sqrt{n}$, $n = 1, 2, 3, \dots$ (so that the concentric disks around the origin get a number of points proportional to their area). The prototypical sunflower has $x = \varphi = \frac{1+\sqrt{5}}{2}$, the golden ratio. This is about the most uniform arrangement of points within a disk that one can get.

Golden ratio sunflower

But nothing stops us from using other numbers. The square root of 5 is not nearly as uniform, forming distinct spirals.

Square root of 5

The number $e$ begins with spirals, but quickly turns into something more uniform.

Euler’s sunflower

The number $\pi$ has stronger spirals: seven of them, due to the approximation $\pi \approx 22/7$.

pi sunflower

Of course, if $\pi$ was actually $22/7$, the arrangement would have rays instead of spirals:

Rational sunflower: 22/7

What if we used more points? The previous pictures have 500 points; here is $\pi$ with 3000. The new pattern has 113 rays: $\pi \approx \frac{355}{113}$.

pi with 3000 points

Apéry’s constant $\zeta(3) \approx 1.20206$, after beginning with five spirals, refuses to form rays or spirals again even with 3000 points.

Apéry’s constant, 3000 points

The images were made with Scilab as follows, with an offset by 1/2 in the polar radius to prevent the center from sticking out too much.
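The listing did not survive in this copy; here is a minimal Scilab sketch of how such a picture can be produced (the variable names and the point count are my own choices):

```scilab
// Sunflower for a number x: point k gets polar angle 2*%pi*x*k and
// radius sqrt(k - 1/2); the 1/2 offset keeps the center point from
// sticking out, as mentioned above.
x = (1 + sqrt(5))/2;   // the golden ratio; replace with %pi, %e, etc.
k = 1:500;
theta = 2*%pi*x*k;
r = sqrt(k - 1/2);
plot(r.*cos(theta), r.*sin(theta), '.')
```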

At the other extreme is the natural cubic spline (considered in Connecting dots naturally), which passes through every data point while minimizing the amount of wiggling, measured by the integral of the square of its second derivative. Like this:

Natural cubic spline

A smoothing spline is something in between the above extremes: it insists on neither being a line (i.e., having zero second derivative) nor on passing through given points (i.e., having zero residuals). Instead, it minimizes a weighted sum of both things: the integral of second derivative squared, and the sum of residuals squared. Like this plot, where the red points are given but the spline chooses to interpolate the green ones:

Smoothing spline

I’ll demonstrate a version of a smoothing spline that might not be exactly canonical, but is very easy to implement in Matlab or Scilab (which I prefer to use). As in Connecting dots naturally, assume the knots are equally spaced, at unit distance for simplicity. The natural cubic spline is determined by the values $z_j$ of its second derivative at the interior knots. As explained earlier, these can be found from the linear system

$\frac16\left(z_{j-1} + 4z_j + z_{j+1}\right) = \Delta^2 y_j$

where the column on the right contains the amounts by which the derivative of the piecewise linear interpolant through the given points jumps at every knot. The notation $\Delta^2 y_j$ means the second difference of $y$: for example, $\Delta^2 y_2 = y_1 - 2y_2 + y_3$.

A smoothing spline is also a natural spline, but for a different set of points $\tilde y_j$. One has to find the $\tilde y_j$ that minimize a weighted sum of $\sum_j (\tilde y_j - y_j)^2$ and of $\int (f'')^2$. The latter integral is easily computed in terms of $z_j$: it is equal to $\frac13 \sum_j \left(z_j^2 + z_j z_{j+1} + z_{j+1}^2\right)$. Since this quadratic form is comparable to $\sum_j z_j^2$, I’ll work with the latter instead.

The idea is to set up an underdetermined system for $z_j$ and the residuals $\delta_j = \tilde y_j - y_j$, and let Scilab find a least squares solution of that. Let’s introduce a weight parameter $\lambda$ that will determine the relative importance of curvature vs residuals; it is convenient to rescale the unknowns so that the curvature terms and the residuals scale the same way. The residual terms contribute to the linear system for $z_j$, since the right hand side now has $\Delta^2 \tilde y_j$ instead of $\Delta^2 y_j$. This contribution is $\Delta^2 \delta_j$. Moving it to the left hand side (since the $\delta_j$ are unknowns) we obtain the following system:

$\frac16 A z - D\delta = D y$

where $A$ is the same tridiagonal matrix as above, and $D$ is the rectangular Laplacian-type matrix computing second differences: $(D\delta)_j = \delta_{j-1} - 2\delta_j + \delta_{j+1}$.

All in all, the system has the $z_j$ and the $\delta_j$ as unknowns, and one equation per interior knot, each reflecting the continuity of the first derivative there. The lsq command of Scilab finds the solution with the least sum of squares of the unknowns, which is what we are after.

Time for some examples; a couple can be seen above. Here are more:

lambda = 0.1; lambda = 1

As $\lambda$ increases, the spline approaches the regression line:

lambda = 10; lambda = 100

Finally, the Scilab code. It is almost the same as for the natural spline; the difference is in the five lines from B=... to newy=... The code after that merely evaluates and plots the spline.
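The five lines themselves are not reproduced above; here is a hedged sketch of how the lsq-based setup can look. The matrix names, the sample data, and the exact scaling are my own choices and may differ from the original code:

```scilab
// Smoothing spline via the minimum-norm least squares solution.
y = [0 2 1 3 2 4 3 5 4 6 5];      // sample data at knots 0,1,...,10
n = length(y);
lambda = 1;                        // curvature weight; larger = straighter
A = 4*eye(n-2, n-2) + diag(ones(1, n-3), 1) + diag(ones(1, n-3), -1);
D = zeros(n-2, n);                 // rectangular second-difference matrix
for j = 1:n-2
    D(j, j:j+2) = [1 -2 1];
end
// unknowns w = [lambda*z; delta]; lsq minimizes the norm of w,
// i.e., lambda^2*sum(z.^2) + sum(delta.^2)
M = [A/(6*lambda), -D];
w = lsq(M, D*y');
z = w(1:n-2)'/lambda;              // second derivatives at interior knots
newy = y + w(n-1:$)';              // points the spline actually interpolates
```

As $\lambda$ grows, the minimum-norm solution pushes the $z_j$ toward zero, which is why the spline flattens toward the regression line.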

Pick two random numbers from the interval $[0,1]$, independent and uniformly distributed. Normalize them to have mean zero, which simply means subtracting their mean from each. Repeat many times. Plot the histogram of all numbers obtained in the process.

Two random numbers normalized to zero mean

No surprise here. In effect this is the distribution of $\pm(X_1-X_2)/2$ with $X_1, X_2$ independent and uniformly distributed over $[0,1]$. The probability density function of a sum of independent variables is found via convolution, and the convolution of a rectangular function with itself is a triangular function.

Repeat the same with four numbers, again subtracting the mean. Now the distribution looks vaguely bell-shaped.

Four random numbers normalized to zero mean

With ten numbers or more, the distribution is not so bell-shaped anymore: the top is too flat.

Ten random numbers normalized to zero mean

The mean now follows an approximately normal distribution, but subtracting it from the uniformly distributed numbers amounts to convolving the Gaussian with a rectangular density. Hence the flattened top.
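The experiment is a few lines of Scilab; this sketch (the sample counts are my own choice) reproduces the flat-topped histogram:

```scilab
// n numbers at a time, normalized to zero mean; repeated N times
n = 10; N = 1e5;
x = rand(n, N);                    // each column is one experiment
x = x - ones(n, 1)*mean(x, 'r');   // subtract each column's mean
histplot(100, x(:)')               // flat-topped for n = 10
```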

What if we use the median instead of the mean? With two numbers there is no difference: the median is the same as the mean. With four there is.

Four random numbers normalized to zero median

That’s an odd-looking distribution, with convex curves on both sides of a pointy maximum. And with more numbers it becomes even more strange.
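The median version differs from the mean version by one function call; a self-contained sketch (constants again my own):

```scilab
// four numbers at a time, normalized to zero median
n = 4; N = 1e5;
x = rand(n, N);                      // each column is one experiment
x = x - ones(n, 1)*median(x, 'r');   // subtract each column's median
histplot(100, x(:)')                 // pointy maximum, convex sides
```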

To generate a random number uniformly distributed on the interval $[0,1]$, one can keep tossing a fair coin, record the outcomes as an infinite sequence $b_1, b_2, b_3, \dots$ of 0s and 1s, and let $X = \sum_{n=1}^{\infty} b_n 2^{-n}$. Here is a histogram of samples from the uniform distribution… nothing to see here, except maybe an incidental interference pattern.

Sampling the uniform distribution

Let’s note that $X = (b_1 + X')/2$, where $X'$ has the same distribution as $X$ itself and is independent of $b_1$. This has an implication for the (constant) probability density function $p$ of $X$:

$p(x) = p(2x) + p(2x-1)$

because $2p(2x)$ is the p.d.f. of $X'/2$ and $\frac12(\delta_0 + \delta_{1/2})$ is the distribution of $b_1/2$. Simply put, $p$ is equal to the convolution of the rescaled function $2p(2x)$ with the discrete measure $\frac12(\delta_0 + \delta_{1/2})$.

Let’s iterate the above construction by letting each $b_n$ be uniformly distributed on $[0,1]$, instead of being constrained to the endpoints. This is like tossing a “continuous fair coin”. Here is a histogram of samples of the resulting sum $X$; predictably, with more averaging the numbers gravitate toward the middle.

Sampling the Fabius distribution

This is not a normal distribution; the top is too flat. The plot was made with Scilab code putting n samples into b buckets:
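The original listing is missing from this copy; here is a sketch that produces such a histogram (the constants n and b are my guesses, and the bucket names match the c.d.f. one-liner below):

```scilab
// n samples of  sum_k u_k/2^k  with u_k uniform on [0,1], in b buckets
n = 1e6; b = 200;
a = zeros(1, n);
for k = 1:20                       // 20 terms are plenty of accuracy
    a = a + rand(1, n)/2^k;        // "continuous coin" at scale 2^-k
end
c = zeros(1, b);
for j = 1:b
    c(j) = sum(a <= j/b) - sum(a <= (j-1)/b);   // bucket counts
end
plot(linspace(0, 1, b), c*b/n)     // normalized histogram (the p.d.f.)
```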

If this plot is too jagged, look at the cumulative distribution function instead:

Fabius function

It took just one more line of code: plot(linspace(0,1,b),cumsum(c)/sum(c))

Compare the two plots: the c.d.f. looks very similar to the left half of the p.d.f. It turns out, they are identical up to scaling.

Let’s see what is going on here. As before, $X = (b_1 + X')/2$, where $X'$ has the same distribution as $X$ itself and the summands are independent. But now that $b_1$ is uniform, the implication for the p.d.f. $p$ of $X$ is different:

$p(x) = 2\left(P(2x) - P(2x-1)\right), \qquad P' = p.$

This is a direct relation between $p$ and its antiderivative $P$. Incidentally, it shows that $p$ is infinitely differentiable, because the right hand side always has one more derivative than the left hand side.
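The gain of one derivative can be iterated (a sketch, writing $P$ for the antiderivative of $p$):

```latex
% P is Lipschitz, so p = 2(P(2\cdot) - P(2\cdot - 1)) is continuous;
% then P \in C^1, so p \in C^1, and by induction
p \in C^k \;\Longrightarrow\; P \in C^{k+1} \;\Longrightarrow\; p \in C^{k+1},
\qquad k = 0, 1, 2, \dots
```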

To state the self-similarity property of $X$ in the cleanest way possible, one introduces the cumulative distribution function $F$ (the Fabius function) and extends it beyond $[0,1]$ by alternating even and odd reflections across the right endpoint. The resulting function satisfies the delay-differential equation $F'(x) = 2F(2x)$: the derivative is a rescaled copy of the function itself.

Since $F$ vanishes at the even integers, it follows that at every dyadic rational, all but finitely many derivatives of $F$ are zero. The Taylor expansion at such points is a polynomial, while $F$ itself is not. Thus, $F$ is nowhere analytic despite being everywhere $C^\infty$.

This was, in fact, the motivation for J. Fabius, who introduced this construction in the 1966 paper A probabilistic example of a nowhere analytic $C^\infty$-function.

It is easy to find the minimum of $x^2 + 16y^2$ if you are human. For a computer this takes more work:

Search for the minimum of x^2+16y^2

The animation shows a simplified form of the Nelder-Mead algorithm: a simplex-based minimization algorithm that does not use any derivatives of $f$. Such algorithms are easy to come up with for functions of one variable, e.g., the bisection method. But how does one minimize a function of two variables?

A natural way to look for the minimum is to slide along the graph in the direction opposite to the gradient of $f$; this is the method of steepest descent. But for computational purposes we need a discrete process, not a continuous one. Instead of thinking of a point sliding down, think of a small tetrahedron tumbling down the graph of $f$; this is a discrete process of flips and flops. The process amounts to the triangle of contact being replaced by another triangle with an adjacent side. The triangle is flipped in the direction away from the highest vertex.

This is already a reasonable minimization algorithm: begin with a triangle; find the values of $f$ at its vertices; reflect the triangle away from the highest value; if the reflected point has a smaller value, move there; otherwise stop.

But there’s a problem: the size of the triangle never changes in this process. If the triangle is large, we won’t know where the minimum is even if the triangle eventually covers it. If it is small, it will be moving in tiny steps.

Perhaps, instead of stopping when reflection does not work anymore, we should reduce the size of the triangle. It is natural to contract it toward the “best” vertex (the one with the smallest value of $f$), replacing the two other vertices with the midpoints of the corresponding sides. Then repeat. The stopping condition can be the values of $f$ at all vertices becoming very close to one another.

This looks clever, but the results are unspectacular. The algorithm is prone to converging to a non-stationary point where, just by accident, the triangle attains a nearly horizontal position. The problem is that the triangle, while changing its size, does not change its shape to fit the geometry of the graph of $f$.

The Nelder-Mead algorithm adapts the shape of the triangle by including the possibility of stretching while flipping. Thus, the triangle can grow smaller and larger, moving faster when the path is clear, or becoming very thin to fit into a narrow passage. Here is a simplified description:

Begin with some triangle.

Evaluate the function at each vertex. Call the vertices $A$, $B$, $C$, where $A$ is the worst one (the largest value of $f$) and $C$ is the best (the smallest value).

Reflect $A$ about the midpoint of the good side $BC$. Let $A'$ be the reflected point.

If $f(A') < f(C)$, then we consider moving even further in the same direction, extending the line $AA'$ beyond $A'$ by half the length of $AA'$. Choose between $A'$ and the extended point based on where $f$ is smaller, and make the chosen point a new vertex of our triangle, replacing $A$.

Else, do not reflect, and instead shrink the triangle toward the best vertex $C$.

Repeat, stopping when we either exceed the allotted number of iterations or all values of $f$ at the vertices of the triangle become nearly equal.

(The full version of the Nelder-Mead algorithm also includes a comparison of $f(A')$ with $f(B)$, and also involves trying a point inside the triangle.)
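The simplified algorithm above is short enough to sketch in Scilab (the test function, starting triangle, and tolerances here are my own choices):

```scilab
// Simplified triangle algorithm for f(x,y) = x^2 + 16*y^2
function v = f(p)
    v = p(1)^2 + 16*p(2)^2;
endfunction

T = [4 5; 5 4; 5 5];                   // initial triangle, one vertex per row
for iter = 1:200
    vals = [f(T(1,:)); f(T(2,:)); f(T(3,:))];
    [vals, idx] = gsort(vals, 'g', 'd');   // sort descending: worst first
    T = T(idx, :);
    A = T(1,:); B = T(2,:); C = T(3,:);    // A worst, C best
    M = (B + C)/2;                          // midpoint of the good side
    A1 = 2*M - A;                           // reflected point
    if f(A1) < f(C) then
        A2 = A1 + (A1 - A)/2;               // try stretching further
        if f(A2) < f(A1) then T(1,:) = A2; else T(1,:) = A1; end
    else
        T(1,:) = (A + C)/2;                 // shrink toward the best vertex
        T(2,:) = (B + C)/2;
    end
    if max(vals) - min(vals) < 1e-12 then break; end
end
disp(T(3,:))   // best vertex found (approximate minimizer)
```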

Rosenbrock’s function

This is Rosenbrock’s function, one of the standard torture tests for minimization algorithms. Its graph has a narrow valley along the parabola $y = x^2$. At the bottom of the valley, the incline toward the minimum is relatively small compared to the steep walls surrounding it. The steepest descent trajectory quickly reaches the valley but dramatically slows down there, moving in tiny zigzagging steps.
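In Scilab, with the usual coefficients (the constants 1 and 100 are the standard choice, assumed here):

```scilab
// Rosenbrock's function: a narrow curved valley along y = x^2
function v = rosenbrock(p)
    v = (1 - p(1))^2 + 100*(p(2) - p(1)^2)^2;
endfunction
// global minimum 0 at (1, 1), at the bottom of the valley
```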

Each $f_n$ is piecewise linear and increasing. At each step of the construction, every line segment of $f_n$ (say, with slope $s$) is replaced by two segments, with slopes $2\lambda s$ and $2(1-\lambda)s$. Since the average of the two new slopes is $s$, the endpoints of the segment stay fixed. Hence, the functions $f_n$ agree at dyadic rationals and converge to an increasing function $f$.

Since $f = f_n$ at the dyadic rationals of level $n$, it is easy to understand $f$ by considering its values at dyadic rationals and using monotonicity. This is how one can see that:

The difference of values of $f$ at consecutive dyadic points of level $n$ is at most $(1-\lambda)^n$. Therefore, $f$ is Hölder continuous with exponent $\log_2 \frac{1}{1-\lambda}$.

The difference of values of $f$ at consecutive dyadic points of level $n$ is at least $\lambda^n$. Therefore, $f$ is strictly increasing, and its inverse is Hölder continuous with exponent $\log_{1/\lambda} 2$.
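The two bullet points amount to the increment bounds below (a sketch; the “level $n$” points are the multiples of $2^{-n}$):

```latex
% slopes get multiplied by 2\lambda or 2(1-\lambda) at each step, so over
% a dyadic interval of length 2^{-n}:
\lambda^n \;\le\; f\left(\frac{k+1}{2^n}\right) - f\left(\frac{k}{2^n}\right)
\;\le\; (1-\lambda)^n
% solving (1-\lambda)^n = (2^{-n})^{\alpha} for \alpha gives the Holder
% exponent \alpha = \log_2 \frac{1}{1-\lambda} of f.
```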

It remains to check that $f' = 0$ almost everywhere. Since $f$ is monotone, it is differentiable almost everywhere. Let $x$ be a point of differentiability (and not a dyadic rational, though this is automatic). For each $n$, let $a_n$ be the slope of $f_n$ on the level-$n$ dyadic interval containing $x$. Since $f'(x)$ exists, we must have $a_n \to f'(x)$. On the other hand, the ratio of consecutive terms of this sequence, $a_{n+1}/a_n$, is always either $2\lambda$ or $2(1-\lambda)$. Such a sequence cannot have a finite nonzero limit. Thus $f'(x) = 0$.

Here is another example, with $\lambda = 1/8$.

lambda = 1/8
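The construction is easy to run numerically; here is a Scilab sketch (my own code) that computes the values of $f$ at dyadic rationals by splitting each rise in the ratio $\lambda : 1-\lambda$:

```scilab
// Salem-type construction: refine the values of f at dyadic rationals.
lambda = 1/8;
y = [0 1];                        // values of f at 0 and 1
for n = 1:10
    newy = zeros(1, 2*length(y) - 1);
    newy(1:2:$) = y;                            // keep previous values
    newy(2:2:$) = y(1:$-1) + lambda*diff(y);    // midpoint: rise split lambda : (1-lambda)
    y = newy;
end
plot(linspace(0, 1, length(y)), y)
```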

By making $\lambda$ very small, and being more careful with the analysis, one can make the Hausdorff dimension of the complement of the set $\{f' = 0\}$ arbitrarily small.

An interesting modification of Salem’s function was introduced by Tukia in Hausdorff dimension and quasisymmetric mappings (1989). For the functions considered above, the one-sided derivatives at every dyadic rational are zero and infinity, which is a rather non-symmetric state of affairs. In particular, these functions are not quasisymmetric. But Tukia showed that if one alternates between $\lambda$ and $1-\lambda$ at every step, the resulting homeomorphism of $[0,1]$ becomes quasisymmetric. Here is the picture of the alternating construction, with preliminary stages shown in green.

One way is to connect them with straight lines, creating a piecewise linear function:

Piecewise linear interpolant

This is the shortest graph of a function that interpolates the data. In other words, the piecewise linear function minimizes the integral

$\int_a^b \sqrt{1 + f'(x)^2}\,dx$

among all functions with $f(x_i) = y_i$. As is often the case, the length functional can be replaced with the elastic energy

$\int_a^b f'(x)^2\,dx$

because the piecewise linear interpolant (and only it) minimizes that too.

Of course, it is not natural for the connecting curve to take such sharp turns at the data points. One could try to fit a polynomial function to these points, which is guaranteed to be smooth. With 11 points we need a 10th degree polynomial. The result is disappointing:

Interpolating polynomial

It is not natural for a curve connecting the given points to shoot up that high. We want a connecting curve that does not wiggle more than necessary.

To reduce the wiggling and remove sharp turns at the same time, one can minimize the bending energy of the function, thinking of its graph as a thin metal rod. This energy is

$\int_a^b f''(x)^2\,dx$

and the function that minimizes it subject to the conditions $f(x_i) = y_i$ looks very nice indeed:

Natural cubic spline

The Euler-Lagrange equation for the functional $\int_a^b f''(x)^2\,dx$ dictates that the fourth derivative of $f$ is zero in the intervals between the knots. Thus, $f$ is a piecewise cubic polynomial. Also, both $f$ and $f'$ must be continuous for any function with integrable second derivative. More delicate analysis is required for $f''$, but it also can be shown to be continuous for the minimizing function $f$; moreover, $f''$ must vanish at the endpoints $a$ and $b$. Taken together, these properties (all derived from the variational problem) complete the description of a natural cubic spline.
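For completeness, a sketch of the variational computation behind these claims: perturb the minimizer by $t\phi$, where $\phi$ vanishes at the knots, and differentiate at $t = 0$:

```latex
0 = \frac{d}{dt}\Big|_{t=0} \int_a^b (f'' + t\phi'')^2\,dx
  = 2\int_a^b f''\phi''\,dx
% integrating by parts twice (between knots, where f is smooth):
  = 2\Big[f''\phi' - f'''\phi\Big]_a^b + 2\int_a^b f''''\,\phi\,dx
% vanishing for all admissible \phi forces f'''' = 0 between the knots
% and f''(a) = f''(b) = 0.
```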

It remains to actually construct one. I prefer to think of this process as adding a correction term to the piecewise linear interpolant. Here the spline is shown together with the interpolant (green) and the correction term (magenta).

PL interpolant, correction term, and their sum: cubic spline

On each interval $[j, j+1]$, the correction term is a cubic polynomial vanishing at both endpoints. The space of such polynomials is two-dimensional: thus, the correction term is

$a_j (x-j)(x-j-1)^2 + b_j (x-j)^2 (x-j-1)$

on this interval. There are 20 coefficients $a_j$, $b_j$ to find. At each of the 9 interior knots $1, 2, \dots, 9$ we have two conditions: the second derivative must have a removable singularity, and the first derivative must jump by the amount opposite to the jump of the derivative of the piecewise linear interpolant. Since the second derivative also vanishes at the endpoints $0$ and $10$, there are 20 linear equations for 20 unknowns.

It is easier to set up a linear system in terms of $z_j$, the values of the second derivative of the correction term at the knots. Indeed, the values at two consecutive knots determine the correction term between them: $a_j$ and $b_j$ are linear combinations of $z_j$ and $z_{j+1}$. This leaves 9 equations (from the jumps of the first derivative) for the 9 unknowns $z_1, \dots, z_9$. The best part is that the matrix of this system is really nice: tridiagonal with dominant diagonal.

One can solve the system for the $z_j$ within a for loop, but I used the Scilab solver instead. Here is the Scilab code for the most interesting part: the spline. The jumps of the derivative of the piecewise linear interpolant are obtained from the second order difference of the sequence of y-values.
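The listing itself is not reproduced in this copy; the following sketch (sample data and variable names are mine) sets up the tridiagonal solve and then evaluates the spline as the piecewise linear interpolant plus the correction term:

```scilab
// Natural cubic spline at knots 0,1,...,10 (sample data is my own)
x = 0:10;
y = [0 2 1 3 2 4 3 5 4 6 5];
d = 6*diff(y, 2);                  // 6 times the jumps of the PL derivative
m = length(d);                     // 9 interior knots
A = 4*eye(m, m) + diag(ones(1, m-1), 1) + diag(ones(1, m-1), -1);
z = [0, (A\d')', 0];               // second derivatives; natural BCs at ends
// evaluate: PL interpolant plus the cubic correction term on each interval
t = linspace(0, 10, 501);
s = interp1(x, y, t, 'linear');
for j = 0:9
    k = find(t >= j & t <= j+1);   // overlap at knots is harmless: term is 0 there
    u = t(k) - j;
    s(k) = s(k) + (u.*(u-1)).*((2-u)*z(j+1) + (1+u)*z(j+2))/6;
end
plot(t, s);
plot(x, y, 'o')
```

On each unit interval the correction equals $\frac{z_j}{6}\left((1-u)^3-(1-u)\right) + \frac{z_{j+1}}{6}\left(u^3-u\right)$, which vanishes at both endpoints and has the prescribed second derivatives there; the factored form in the code is the same polynomial.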