Interdependent Science

Wednesday, November 16, 2016

State Reachability is an important problem in computer science, with many applications in engineering. It is the simplest problem in the broader class of Model Checking problems. Principles of Model Checking looks like a good book for learning about model checking. Ken McMillan has been a pioneer in model checking algorithms since the field's early days in the 1980s. Among his many publications, I would point specifically to his 2003 paper Interpolation and SAT-Based Model Checking. I will sketch out here another SAT-based method, one which should complement McMillan's interpolation method.

State Reachability is an extremely difficult problem, in the sense that for every algorithm there will be some relatively small instances on which that algorithm takes a very long time, and a huge amount of memory, to find an answer. Of course, computer science has still not proved that hard problems actually exist – this is the famous P vs NP question, as yet unresolved – but if any hard problems exist, State Reachability is hard.

Satisfiability, or SAT, is another hard problem – indeed, it is the prime example of a hard problem. State Reachability is like the big brother of Satisfiability. Satisfiability relates to Boolean functions. A Boolean function is some sort of rule or formula that determines a 1 or 0 result for each possible value of some input. The input of a Boolean function is generally an array of 1/0 values. So a simple Boolean function might map the four input values {000, 011, 101, 110} to the result 0, and {001, 010, 100, 111} to 1. When there are only three bits in the input array, as in this example, it is easy to list all eight values of the inputs, compute the value of the function, and determine whether the result is always 0 or whether there are any input values that have a 1 result. As the number of bits in the input array gets larger, say 30 or 100 or 300 bits, the number of input values grows so quickly that it becomes practically impossible to consider each input value individually. Satisfiability algorithms are some of the oldest algorithms in computer science and continue to be improved. SatLive is a good site to find out about recent work in this field.
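
The three-bit example above can be checked exhaustively in a few lines. Here is a minimal sketch, in Python, of brute-force satisfiability checking; the function `f` encodes the example mapping from the text (odd parity of the input bits), and the rest is illustrative:

```python
from itertools import product

# The example function from the text: inputs with an odd number of
# 1 bits (001, 010, 100, 111) map to 1, the rest to 0.
def f(bits):
    return sum(bits) % 2

def is_satisfiable(fn, n_bits):
    # Exhaustive check: feasible only while 2**n_bits stays small.
    return any(fn(bits) == 1 for bits in product([0, 1], repeat=n_bits))

print(is_satisfiable(f, 3))             # True: 001 already gives 1
print(is_satisfiable(lambda b: 0, 3))   # False: the constant-0 function
```

For 300 input bits this loop would need 2^300 iterations, which is exactly why dedicated Satisfiability algorithms matter.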

The State Reachability problem builds on the Satisfiability problem. State Reachability works with a finite state machine, constructed around a more complex Boolean function, one that maps an input array of 1/0 values to an output array of 1/0 values. Rather than computing a single output value from a single input value, a finite state machine performs a series of computations using the same Boolean function in each step. The output of each step becomes part of the input of the next step. This is essentially how digital electronic circuits, such as computers, work. Other engineering problems can also be abstracted to appear in this guise.
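
To make the stepping concrete, here is a minimal sketch of such a machine in Python. The 2-bit machine below is my own hypothetical example, not from the original post: its next-state function xors the input bit into one state bit and then swaps the bits.

```python
# Hypothetical 2-bit machine: one Boolean function maps the input bits
# and present state bits to the next state bits.
def next_state(inp, state):
    a, b = state
    return (b ^ inp[0], a)   # xor input into the low bit, then swap

# Run the machine for a few steps; each step's output becomes part of
# the next step's input, as described in the text.
state = (0, 0)
for inp in [(1,), (1,), (0,)]:
    state = next_state(inp, state)
print(state)   # (1, 1)
```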

A finite state machine can be pictured like this:

Here the five squares on the left represent the present state of the machine, and the five squares on the right are the next state. The two values on the upper left are the inputs to the machine that steer it down one path or another. The grey blob in the middle represents the Boolean function that computes the next state of the machine on the basis of the present state and the machine's input values.

Each value of the output array is a state of the finite state machine. As the machine operates, the machine moves from one state to another. The State Reachability problem asks whether there is any way that the machine can start from a given initial state and eventually reach a given target state.

The behavior of a state machine can be illustrated with a state transition graph:

Here the green dot is the initial state and the red dot is the target state. It is easy to see that there is no path from the initial state to the target state. This kind of inspection gets much harder as the number of states of the machine grows. Each state is a Boolean vector value for the bits of the machine, so every new bit in the machine will double the number of states. Once the number of state bits grows above a few dozen, even a computer will have a hard time recording the complete state graph.

Each step of the machine’s operation involves the evaluation of a Boolean function. To determine whether the target state can be reached in a single step, in one evaluation of the Boolean function, is a Satisfiability problem. The behavior of the machine across two, three, or any fixed number of steps, can again be posed as a Satisfiability problem, just with a larger Boolean function. But as machines grow, the potential number of steps between states also grows. So deciding whether a target state can eventually be reached from an initial state, the State Reachability problem, gets very difficult very quickly as the size of the finite state machine grows.
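
Bounded reachability, asking whether the target is reachable within k steps, can be sketched as follows. A real bounded model checker would hand the k-step unrolling to a SAT solver as one larger formula; this brute-force frontier search, over a hypothetical toy machine, only shows the shape of the question:

```python
from itertools import product

def next_state(inp, state):
    # hypothetical 2-bit machine: xor input into the low bit, then swap
    a, b = state
    return (b ^ inp[0], a)

def reachable_within(step, init, target, k, n_inputs=1):
    """Can `target` be reached from `init` in at most k steps?
    Enumerates every input at each step; a SAT-based bounded check
    answers the same question without explicit enumeration."""
    frontier = {init}
    for _ in range(k):
        if target in frontier:
            return True
        frontier = {step(inp, s)
                    for s in frontier
                    for inp in product([0, 1], repeat=n_inputs)}
    return target in frontier

print(reachable_within(next_state, (0, 0), (1, 1), 2))  # True
print(reachable_within(next_state, (0, 0), (1, 1), 1))  # False
```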

The first quick and simple approach to checking whether a target state can be reached is to perform random simulation. This allows many states to be explored very quickly. If the target state is easy to reach, random simulation may well find a path to it.
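
A random-simulation step can be sketched like this; the interfaces and the toy machine are my own hypothetical illustrations:

```python
import random

def random_simulate(step, init, target, steps, n_inputs=1, seed=0):
    """Random walk from init, returning the path if target is hit.
    A cheap way to explore the easily reached states."""
    rng = random.Random(seed)
    state, path = init, [init]
    for _ in range(steps):
        inp = tuple(rng.randint(0, 1) for _ in range(n_inputs))
        state = step(inp, state)
        path.append(state)
        if state == target:
            return path
    return None   # target not hit; nothing can be concluded

def toy_step(inp, state):
    a, b = state
    return (b ^ inp[0], a)

print(random_simulate(toy_step, (0, 0), (1, 1), 50))
```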

Another approach is to use a Satisfiability checker to see whether there is a reasonably short path from the initial state to the target state. Paths that are too long will be too computationally expensive for the Satisfiability checker. It is simple enough to set up a check for any path to the target of at most some maximum number of steps. If a path is found, the State Reachability problem is thereby solved! If no path is found, a modern Satisfiability checker will be able to provide a minimized proof of unsatisfiability. This proof will typically not include every state variable in the machine. The algorithm I am proposing here requires selecting some relevant state bit from this proof.

The algorithm I am proposing here involves the incremental refinement of an abstract state machine. The states of the given finite state machine are partitioned; each block of the partition constitutes a state of the abstract state machine. The abstract state machine is refined incrementally by splitting blocks one at a time. Each block is defined by a partial evaluation of the state bits. A block is split by picking one of the unvalued bits of the block. The two new blocks are formed by giving the selected state bit the 0 and the 1 value in turn. The state bit used for splitting is picked from the various proofs of unsatisfiability generated by the Satisfiability checker as the algorithm proceeds.
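
Blocks as partial evaluations, and the splitting step, can be sketched directly. The representation below, a dict from bit position to fixed value, is my own illustration, not from the post:

```python
# A block is a partial evaluation of the state bits: a dict mapping some
# bit positions to fixed 0/1 values; unlisted bits are unconstrained.
def split_block(block, bit):
    assert bit not in block, "split must use an unvalued bit"
    b0, b1 = dict(block), dict(block)
    b0[bit], b1[bit] = 0, 1
    return b0, b1

def block_of(state, blocks):
    # Map a concrete state (tuple of bits) to the block containing it.
    for blk in blocks:
        if all(state[i] == v for i, v in blk.items()):
            return blk

# Split the trivial one-block partition on bit 2.
blocks = list(split_block({}, 2))
print(block_of((1, 0, 1, 0, 1), blocks))   # {2: 1}
```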

The abstract state machine guides the exploration of the concrete state machine that defines the problem. A collection of reached concrete states is maintained. Each of these reached states belongs to a block in the partition that defines the abstract state machine, i.e. it maps to an abstract state. The idea is to find ways to reach as many abstract states as possible. If a path cannot be found from an already reached state in one block of the partition to another block in the partition, it may be because there are no paths between any pair of states in those two blocks. If there are no possible paths in the abstract state machine between the block containing the initial state and the block containing the target state, then there are no paths in the concrete machine either. Discovering this would solve the State Reachability problem instance with a negative result.

When some states in a block can reach states in another block, but none of the reached states can, then the block should be refined. The hope is that one of the split-off blocks will contain all the reached states, while the other contains all the states that have paths to the unreached block. The idea is to keep the state transition graph of the abstract state machine as sparse as possible. This sparseness will help guide the Satisfiability checker to focus on abstract paths that have a good likelihood of mapping to concrete paths, or else will result in an abstract state graph with no path at all between the initial block and the target block.
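
The negative case reduces to a plain graph search over blocks: if the abstract transition graph has no path from the block holding the initial state to the block holding the target, the concrete machine cannot reach the target either. A sketch, with illustrative block names:

```python
# Depth-first search over the abstract transition graph. `edges` maps
# each block to the blocks it has (possible) transitions into.
def abstract_reachable(edges, src, dst):
    seen, stack = {src}, [src]
    while stack:
        blk = stack.pop()
        if blk == dst:
            return True
        for nxt in edges.get(blk, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

edges = {"A": ["B"], "B": ["C"]}            # block D has no incoming transitions
print(abstract_reachable(edges, "A", "C"))  # True
print(abstract_reachable(edges, "A", "D"))  # False: a negative answer
```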

I'll go through the pieces of the algorithm again, in pictures. First, random simulation is used to explore states that are easily reached from the initial state:

The abstract state machine is initially trivial, consisting of a single block, corresponding to a partial evaluation of the state bits that has none of the bits given a value:

The first run of the Satisfiability checker can look for paths from the initial state to the target state. If none are found, a bit can be selected from the proof of unsatisfiability. This bit can then be used to split the block:

Now the Satisfiability checker can be used to look for paths to states in a block that has not yet been reached:

Whenever interesting new states have been reached by using the Satisfiability checker, it can be worthwhile to explore further by running random simulation from these new states.

The Satisfiability checker is used to look for paths to the target or to unreached blocks, starting from reached states that are closest to these goals. When a path cannot be found, the resulting unsatisfiability proof can be used to pick a bit to use in splitting the block:

Before splitting a block, though, one should check whether any of the states in the starting block have a path to a state in the goal block. If not, the abstract state machine can mark that transition as impossible:

Eventually the abstract state graph will chop the path from the initial state to the target state into small enough pieces that a combination of random simulation and Satisfiability will find a path, or enough abstract states will be missing enough transitions that the target block will be unreachable from the initial block in the abstract machine. Thus, this algorithm will eventually determine an answer to the given State Reachability problem instance.

Tuesday, October 11, 2016

I read E. O. Wilson’s book Consilience around the time it came out, in 1999. I still find that kind of triumphalism annoying. It’s easy to doubt that “single vision & Newton’s sleep” can ever extend its reach to encompass the full range of human experience, as Wilson argues against Blake, but it is a bit more difficult to see where the obstacles lie. I recall Wilson explicitly addressing one potential limit, but I have never been able to find this discussion again. Did I just dream it? In any case, the puzzle arises out of the fundamental structure of the book’s argument: science gives us knowledge about the world of unprecedented accuracy; and science teaches us that human behavior emerges out of evolutionary pressures, the various claims of transcendence or nobility or rationality having little more validity than the chest thumping of an ape.

The puzzling questions that come out of these ideas are then: “Are not scientists humans? Do scientific theories themselves have no more validity than chest thumping? Since even the chest thumping theory is mere chest thumping, hasn’t science cut away its own foundations and left us floating?” In my dream-like recollection, Wilson responds that the scientific method has given scientists an escape ladder for transcending the human condition. Of course, this claim raises more questions than it answers, but I don’t recall Wilson taking the discussion any further.

Nowadays the popular image of science seems to be more and more tarnished. On the one hand, the great power of science has inevitably brought corruption, both real and imagined. For example, tests of drug effectiveness seem to succumb occasionally to the optimistic bias of the manufacturer. On the other hand, the huge stakes involved in an issue like climate change induce powerful economic and political forces to cast doubt on the scientific theories involved, however pristine the methods that generated those theories.

Can we ever free ourselves from our biases, to get a clear objective look at reality? Perhaps the most honest approach to science will turn out to look more like professional wrestling, a no-holds barred battle of ideas, where the results are determined more by back room deals than the circus on public display. This dichotomy between transcendence and baseness is not limited to our quest for truth. Can our efforts to act in a morally good way ever be free from our base urges, our greed and violence? Is all nobility and virtue really a lie, a mere disguise worn to hide a base grab for power?

The 2016 presidential contest in the USA seems to exemplify this battle between the hope for transcendence and the frank admission of our base nature. Clinton represents the bureaucracy that inevitably arises to administer and regulate our efforts to climb toward the shining castle of the good and the true. Trump represents the raw honest direct grab for power.

There does seem to be a generational cycle between these poles, à la Strauss & Howe. As we approach one pole, its flaws become very apparent, while the opposite pole presents the eternal illusion of the greener grass on the other side. Can we ever develop the wisdom to realize that every system has flaws, that real progress comes from working closely with those flaws rather than running away from them?

Monday, May 16, 2016

One way to move data from one computer to another is through an electronic network. Another way is for the source computer to write the data to some physical medium, e.g. a USB flash drive, then for someone to carry that flash drive over to the destination computer, which can then read the data from the device. That's "sneaker net" - you can move the data faster if you are wearing running shoes!

One of the challenges with energy in general is that places rich in energy are often far from the consumers of that energy. Of course, over time the consumers tend to relocate closer to the sources of energy, e.g. towns spring up where river flow can be dammed or where coal is nearby. Canals and rail lines can be built to carry coal, and pipelines can be built to carry petroleum and natural gas.

Electric transmission lines are another way to move energy. Rather than moving coal by rail, coal burning power plants are sometimes built near coal mines and the generated power moved across those long wires.

Moving energy is particularly difficult when the sources are widely distributed or rapidly changing. For example, natural gas fields are located in remote regions of Siberia. It is not cost-effective to build the very long pipelines that would be necessary to move this gas to existing customers. Gas-burning power plants could be built, but then very long electric transmission lines would be necessary, and those would also be too expensive. Natural gas is also a feed-stock for various plastics and nitrogen fertilizer, so another way to exploit the gas resource is to put a chemical factory at the gas field and then transport the materials produced, which embody the energy of the gas in a much more compact form. Aluminum ore is widely distributed, but aluminum production requires a lot of electricity. If aluminum ore is located near the gas field, then an aluminum plant can be built near the gas field. A paper mill is a similar way to use remote energy resources, allowing energy to be moved in a more compact product form.

At some point though customers want electricity itself rather than paper or aluminum. There are surely many windy places located on remote coastlines. One way to exploit this resource would be by transporting batteries back and forth between producers and consumers. Large ships could carry these batteries, loading up charged batteries from remote coastal areas, carrying them to ports near urban areas where they could be discharged into the power distribution grid, then carrying the depleted batteries back to the remote coastal areas to be recharged.

Today, battery power is generally too expensive to be used for bulk power. In most places batteries make better sense for providing ancillary services such as spinning reserves or power smoothing, though in remote areas battery power is already less expensive than alternatives. But as fossil fuel use becomes more limited and as battery technology improves, transmitting power by transporting batteries - "freighter grid" - seems to be a very practical way to keep the electricity flowing!

Saturday, April 2, 2016

I am spending a few days in Columbia, South Carolina, attending the conference Greening of Religions, sponsored by the Cherry Hill Seminary and the University of South Carolina. One attendee noticed a conflict between two views of science that have come up here. On the one hand, science is responsible for the disenchantment of the world that has opened the door to the environmental crisis that we are facing. On the other hand, scientific theories like ecology and evolution show the kinship of humans with other life on the planet, and the vital importance to human well-being of the well-being of those other forms of life. The observation of this conflict didn't elicit much response at the conference but it is central to my own thinking, my own calling.

In my presentation here, I had a few suggestions for ways to improve our thinking about and practice of science. I suggested that in evaluating the validity of a scientific theory, some kind of middle way is needed. Specifically I would suggest that validity should be something like a function from some parametric space describing a possible context of application of the theory, to some probability of satisfactory application. Roughly speaking, each theory has some large or small range of applicability.

My other suggestion was some kind of democratization of science. Sitting outside at lunch after the session at which I presented, I gave this idea a more concrete form. The way that science students really integrate science is through experiments, by repeating the classic experiments of their field. These repeated experiments are a lot like religious rituals, means to recreate core insights of a tradition. This parallel provides a direction that could work very well.

How about a science cathedral? It could consist of a set of chapels, each chapel dedicated to some classic science experiment. Each experiment could be repeated on a regular schedule, every day perhaps for some simple ones, once a year for complex ones. Each experiment could have a liturgy, starting with a description of the scientific question that initially gave rise to the experiment and some background for the people originally involved. Then as the steps of the experiment are executed, further liturgy would explain what is being done. Perhaps attendees could file by meters or flasks or whatever to observe voltages and colors etc. The ritual performers could wear costumes that reflect the time and place of the origin of the experiment.

Part of the program would also be to invite members of the community to deepen their participation, step by step. The idea is that people can really come to understand how the experiments work, to really share the insight about the world.

Of course some rituals would have to incorporate field work. Traditional religion involves parades and pilgrimages, so the parallel is still holding.

This vision is of course much like a science museum. But science museums tend to be rather dead. Science museums do tend more and more to provide hands-on opportunities. Perhaps a key difference is the focus of the science cathedral on the intent of experiments to induce particular insights. Once I was in a museum where Tibetan Buddhist musical instruments were on display, inside glass cases. Often enough I will play such instruments in ritual practice, e.g. at Kunzang Palchen Ling. It was really sad to see these instruments locked away in a case, away from their intended use. The museum could have provided a hands-on opportunity by leaving the instruments out of the case. But it is still a whole different experience to play a gyaling in a ritual, holding the intent in mind.

The more scientific practice gets isolated in ivory towers, the less it will be understood or trusted by the great number of people without the keys to those towers. It is vital for our future that we cultivate ways to bring scientific insights to the people, not just the technological products enabled by science.

Sunday, January 17, 2016

What is diversity? It is attributed to a set with many members when the members have characteristics that differ among those members. The most interesting case is where the set is a human community or organization with many human members. Humans have many characteristics, such as: gender, age, size, skin color, neuropsychological pattern, language competencies, religious belief, economic status, organization role, geographical history, ancestral characteristics, etc. Members of a diverse community share at least their membership in that community, while differing in at least some of their other characteristics.

Communities usually have structure beyond a simple boundary between members and non-members. People are generally members of many communities at the same time. People in one community will belong to a variety of other communities. That is part of the diversity of a community.

Communities encourage or discourage various actions of their members. Of course the community is composed of its members; it is the members’ actions which do the encouraging or discouraging. The vector of encouragement combines with the vector of diversity to drive the evolution of the community. This dynamic interplay is what creates our world.

Diversity can be accommodated in various ways: by providing services adapted to the particular needs of community members with their individual characteristics, and also by discouraging member actions that conflict with such accommodation. Of course, to discourage is to fail to accommodate, so there are difficult conflicts to resolve. Should a conservative religious organization be permitted to restrict leadership roles to members of a particular gender?

A deeper question is the extent to which diversity should be cultivated, not merely accommodated. There are types of accidental or historical diversity which are not fundamental to the purpose of the community. It is relatively easy for a community to cultivate such diversity. For example, a physics research group might welcome members from many national backgrounds, and make a special effort to cherish that diversity, e.g. by celebrating the holidays of those many nations.

More challenging is the question of the extent to which diversity that touches the purpose of the community should be cultivated. A certain amount of diversity of opinion is essential to the advancement of science, but adherence to totally discredited theories would seem to be good reason for some sort of corrective action. But it is very difficult to know exactly where to draw this line. More than one discredited theory has later triumphed.

Another sort of diversity involves frameworks which are more incommensurable than contradictory. For example, an art department may face a decision whether to support both realistic and abstract artists, who may find each other’s aesthetic judgements almost unintelligible.

Communities throughout history have managed diversity in various ways. There are often roles that are rigidly assigned based on a member’s characteristics such as gender or ancestry. Beyond this sort of functional diversity, any accommodation is most often seen as a temporary stage on the way to ultimate uniformity. This general trend has led to the globalization of today.

Globalization has proved a mixed blessing and by now seems self-limiting, having exhausted the planetary resources required for its own sustenance. Just as fragmentation of a population creates the opportunity for biological speciation, the collapse of global commerce will make room for the reemergence of cultural diversity.

Did the suppression of diversity by globalization participate in the failure of globalization? If we are to learn from that failure, we need to find new ways to cultivate and cherish diversity!

Thursday, January 14, 2016

Prices are a way of evaluating options. Given two floor mops, if mop A works twice as well as mop B, then an efficient market ought to mark mop A at twice the price of mop B.

We could put prices on sports teams in just this way. For example, gambling pools should, given enough information, be able to put proper odds on each team in a match or a tournament.

But a tournament is more interesting than a match. Suppose we have a tournament with four teams playing, A, B, C, and D. We might be able to predict with reasonable accuracy which team will win in a match between any particular pair. But that doesn't let us assign any kind of price or value to each team. A might always win against B, and B against C, but it might happen that C always wins against A! In a four way single elimination tournament, the final champion team will depend on the way the tournament is arranged: on how the teams are paired up in the first round, etc.

Given a tournament structure, accurate pairwise odds will map cleanly to accurate odds on the eventual champion. But a different tournament structure can easily give a different most likely champion. Teams don't have a universal value, only a value relative to the tournament structure.
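
The non-transitive example can be played out in a few lines. The deterministic win table below is purely illustrative:

```python
# Non-transitive results, as in the text: A beats B, B beats C, yet
# C beats A; D loses to everyone.
beats = {("A", "B"), ("B", "C"), ("C", "A"),
         ("A", "D"), ("B", "D"), ("C", "D")}

def winner(x, y):
    return x if (x, y) in beats else y

def champion(bracket):
    # Single elimination: two semifinals, then the final.
    (p, q), (r, s) = bracket
    return winner(winner(p, q), winner(r, s))

print(champion((("A", "B"), ("C", "D"))))  # C: A and C win, then C beats A
print(champion((("A", "C"), ("B", "D"))))  # B: C and B win, then B beats C
```

Same pairwise results, two brackets, two different champions.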

Decision making in general is a matter of evaluating options. Accurate evaluation depends on understanding the context in which the various contemplated actions will unfold. This is just a manifestation of the unity of emptiness and interdependent origination. The value of a thing is not inherent in the thing but rather is a property of how the thing is situated in its environment.

Sunday, November 15, 2015

Somehow I am rather fascinated by how bicycle tires work. Probably one reason is that I just enjoy how things that seem very simple before being analyzed, turn out to be rather complicated as they are studied more carefully.

A tire on a bicycle or car etc. must support the vehicle, which involves forces parallel to the radius of the wheel, and also transmit tangential forces from engine torque or braking. To support the vehicle, the tire must push down on the ground with its share of the weight of the vehicle, and also push up on the rim with the same force. If one lifts the vehicle up off the wheels and then gradually lowers it back onto the wheels, the tires will deform from a uniform round shape to a flattened shape.

How much deformation will occur? That is, how deeply will the vehicle sink from when the tires first touch the ground until they are supporting all the weight? This amount of deformation must satisfy two conditions: the area of the resulting contact patch, times the air pressure inside the tire, must produce sufficient force against the ground to support the vehicle's weight. But the deformation must also provide enough net force against the rim to support that same weight, as outlined in Jobst Brandt's article.
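
The first condition by itself fixes the size of the contact patch. A quick worked number, with assumed values for a road bicycle wheel (the load and pressure figures are my own illustrative choices):

```python
# Contact-patch area times inflation pressure must equal the load on
# the wheel, so the area is simply load divided by pressure.
load_N = 400.0        # about 40 kg resting on one wheel (assumed)
pressure_Pa = 550e3   # about 80 psi, a typical road-tire pressure (assumed)

area_m2 = load_N / pressure_Pa
print(f"contact patch = {area_m2 * 1e4:.1f} cm^2")  # 7.3 cm^2
```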

At first thought, the deformation looks like a single variable, determined by the height of the rim off the ground. The lower the rim, the more the tire is deformed. But how can a single variable satisfy two conditions, without an impossibly unlikely stroke of luck?

Thinking about this a bit more, it seems that at a given height of the rim off the ground, a range of tire shapes is possible. This range of shapes provides the flexibility for both constraints to be satisfied.

In the following diagram, C is the half width of the tire carcass, i.e. half the distance from bead to bead along the carcass. Where the carcass is not touching the ground it is assumed to have a circular shape. This is still a very simplified picture, because there will be interactions between the neighboring cross-sections of the tire. A very large radius of curvature will permit a large contact patch, increasing the force of the tire against the ground but reducing the lift of the rim. A small radius of curvature will reduce the contact patch, so the force of the tire against the ground is smaller but the lift of the rim is greater. The actual shape should be where the two forces are equal.

About Me

After receiving a Master's degree in physics in 1981, I worked in the semiconductor industry, developing software that engineers used to check and optimize designs of digital circuits.
Having practiced Buddhism since 1980, I completed a traditional three year meditation retreat at Karme Ling Retreat Center, New York, in 1996.