We’ll also need queries for all the cases in between if we continue with this design to cover assets allocated to departments and products allocated to cost centers. Each new level of the query adds to a combinatorial explosion and raises many questions about the design. What happens if we change the allocation rules? What if a product can be allocated directly to a cost center instead of passing through a department? Are the queries efficient as the amount of data in the system increases? Is the system testable? To make matters worse, a real-world allocation model would also contain many more entities and associative entities.
The Delicate Sound of a Combinatorial Explosion…
We’ve introduced the problem and sketched out a rudimentary solution in just a few pages, but imagine how a real system like this might evolve over an extended period of months or even years with a team of people involved.

One such early system, the Logic Theorist, was able to prove most of the theorems in the second chapter of Whitehead and Russell’s Principia Mathematica, and even came up with one proof that was much more elegant than the original, thereby debunking the notion that machines could “only think numerically” and showing that machines were also able to do deduction and to invent logical proofs.13 A follow-up program, the General Problem Solver, could in principle solve a wide range of formally specified problems.14 Programs that could solve calculus problems typical of first-year college courses, visual analogy problems of the type that appear in some IQ tests, and simple verbal algebra problems were also written.15 The Shakey robot (so named because of its tendency to tremble during operation) demonstrated how logical reasoning could be integrated with perception and used to plan and control physical activity.16 The ELIZA program showed how a computer could impersonate a Rogerian psychotherapist.17 In the mid-seventies, the program SHRDLU showed how a simulated robotic arm in a simulated world of geometric blocks could follow instructions and answer questions in English that were typed in by a user.18 In later decades, systems would be created that demonstrated that machines could compose music in the style of various classical composers, outperform junior doctors in certain clinical diagnostic tasks, drive cars autonomously, and make patentable inventions.19 There has even been an AI that cracked original jokes.20 (Not that its level of humor was high—“What do you get when you cross an optic with a mental object? An eye-dea”—but children reportedly found its puns consistently entertaining.)
The methods that produced successes in the early demonstration systems often proved difficult to extend to a wider variety of problems or to harder problem instances. One reason for this is the “combinatorial explosion” of possibilities that must be explored by methods that rely on something like exhaustive search. Such methods work well for simple instances of a problem, but fail when things get a bit more complicated. For instance, to prove a theorem that has a 5-line long proof in a deduction system with one inference rule and 5 axioms, one could simply enumerate the 3,125 possible combinations and check each one to see if it delivers the intended conclusion.

…

But as the task becomes more difficult, the method of exhaustive search soon runs into trouble. Proving a theorem with a 50-line proof does not take ten times longer than proving a theorem that has a 5-line proof: rather, if one uses exhaustive search, it requires combing through 5⁵⁰ ≈ 8.9 × 10³⁴ possible sequences—which is computationally infeasible even with the fastest supercomputers.
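To make the arithmetic concrete, here is a minimal Python sketch of the brute-force approach, using a made-up five-axiom toy system rather than anything from the text; the point is only that every candidate proof of length k is one of 5^k sequences, each of which must be generated and checked.

import itertools

AXIOMS = ["A1", "A2", "A3", "A4", "A5"]   # five axioms, one (stubbed) inference rule

def proves_goal(sequence):
    # Placeholder check: a real prover would apply the inference rule step by
    # step and test whether the final line is the target theorem.
    return sequence == ("A2", "A4", "A1", "A1", "A5")

# Depth 5: 5**5 = 3,125 candidate proofs, trivially enumerable.
print(sum(1 for c in itertools.product(AXIOMS, repeat=5) if proves_goal(c)))

# Depth 50: 5**50 candidates, hopeless to enumerate.
print(f"{5**50:.1e}")   # 8.9e+34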
To overcome the combinatorial explosion, one needs algorithms that exploit structure in the target domain and take advantage of prior knowledge by using heuristic search, planning, and flexible abstract representations—capabilities that were poorly developed in the early AI systems. The performance of these early systems also suffered because of poor methods for handling uncertainty, reliance on brittle and ungrounded symbolic representations, data scarcity, and severe hardware limitations on memory capacity and processor speed.

…

In practice, however, getting evolutionary methods to work well requires skill and ingenuity, particularly in devising a good representational format. Without an efficient way to encode candidate solutions (a genetic language that matches latent structure in the target domain), evolutionary search tends to meander endlessly in a vast search space or get stuck at a local optimum. Even if a good representational format is found, evolution is computationally demanding and is often defeated by the combinatorial explosion.
Neural networks and genetic algorithms are examples of methods that stimulated excitement in the 1990s by appearing to offer alternatives to the stagnating GOFAI paradigm. But the intention here is not to sing the praises of these two methods or to elevate them above the many other techniques in machine learning. In fact, one of the major theoretical developments of the past twenty years has been a clearer realization of how superficially disparate techniques can be understood as special cases within a common mathematical framework.

When the number of things an algorithm needs to do grows exponentially with the size of its input, computer scientists call it a combinatorial explosion and run for cover. In machine learning, the number of possible instances of a concept is an exponential function of the number of attributes: if the attributes are Boolean, each new attribute doubles the number of possible instances by taking each previous instance and extending it with a yes or no for that attribute. In turn, the number of possible concepts is an exponential function of the number of possible instances: since a concept labels each instance as positive or negative, adding an instance doubles the number of possible concepts. As a result, the number of concepts is an exponential function of an exponential function of the number of attributes! In other words, machine learning is a combinatorial explosion of combinatorial explosions. Perhaps we should just give up and not waste our time on such a hopeless problem?
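A quick computation (a sketch of my own, not from the text) shows how fast the double explosion grows for Boolean attributes.

# With d Boolean attributes there are 2**d possible instances, and each concept
# labels every instance yes or no, giving 2**(2**d) possible concepts.
for d in range(1, 6):
    instances = 2 ** d
    concepts = 2 ** instances
    print(f"{d} attributes -> {instances} instances -> {concepts} possible concepts")

Already at five attributes there are over four billion candidate concepts.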

…

It helps that, if the goal is to cure cancer, we don’t necessarily need to understand all the details of how tumor cells work, only enough to disable them without harming normal cells. In Chapter 6, we’ll also see how to orient learning toward the goal while steering clear of the things we don’t know and don’t need to know.
More immediately, we know we can use inverse deduction to infer the structure of the cell’s networks from data and previous knowledge, but there’s a combinatorial explosion of ways to apply it, and we need a strategy. Since metabolic networks were designed by evolution, perhaps simulating it in our learning algorithms is the way to go. In the next chapter, we’ll see how to do just that.
Deeper into the brain
When backprop first hit the streets, connectionists had visions of quickly learning larger and larger networks until, hardware permitting, they amounted to artificial brains.

…

In the days before computers, a police artist could quickly put together a portrait of a suspect from eyewitness interviews by selecting a mouth from a set of paper strips depicting typical mouth shapes and doing the same for the eyes, nose, chin, and so on. With only ten building blocks and ten options for each, this system would allow for ten billion different faces, more than there are people on Earth.
In machine learning, as elsewhere in computer science, there’s nothing better than getting such a combinatorial explosion to work for you instead of against you. What’s clever about genetic algorithms is that each string implicitly contains an exponential number of building blocks, known as schemas, and so the search is a lot more efficient than it seems. This is because every subset of the string’s bits is a schema, representing some potentially fit combination of properties, and a string has an exponential number of subsets.
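As an illustration (a sketch of my own, not code from the text): every way of replacing some of a string's bits with a 'don't care' marker is a schema the string samples, so an n-bit string belongs to 2^n schemas at once.

from itertools import combinations

def schemas(bits, wildcard="*"):
    """Every schema a bit string belongs to: each subset of positions is
    masked with the wildcard, the remaining positions keep their bits."""
    n = len(bits)
    out = []
    for k in range(n + 1):
        for masked in combinations(range(n), k):
            out.append("".join(wildcard if i in masked else bits[i] for i in range(n)))
    return out

print(schemas("101"))        # ['101', '*01', '1*1', '10*', '**1', '*0*', '1**', '***']
print(len(schemas("101")))   # 2**3 = 8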

He was probably playing ball with one of his sons. He saw the ball rolling on a curved surface ...
AND CONCLUDED—EUREKA—SPACE IS CURVED!
CHAPTER FIVE
CONTEXT AND KNOWLEDGE
PUTTING IT ALL TOGETHER
So how well have we done? Many apparently difficult problems do yield to the application of a few simple formulas. The recursive formula is a master at analyzing problems that display inherent combinatorial explosion, ranging from the playing of board games to proving mathematical theorems. Neural nets and related self-organizing paradigms emulate our pattern-recognition faculties, and do a fine job of discerning such diverse phenomena as human speech, letter shapes, visual objects, faces, fingerprints, and land terrain images. Evolutionary algorithms are effective at analyzing complex problems, ranging from making financial investment decisions to optimizing industrial processes, in which the number of variables is too great for precise analytic solutions.
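For the board-game case, the recursive formula boils down to recursive search over moves and countermoves; the following is a minimal minimax-style sketch of my own, on a toy take-one-or-two counters game rather than chess, just to show the shape of the recursion and why the work grows exponentially with depth.

def best_score(pile, my_turn):
    """Toy subtraction game: players alternately remove 1 or 2 counters, and
    whoever takes the last counter wins. Returns +1 if the player we are
    scoring for can force a win from here, -1 otherwise. Every ply multiplies
    the number of positions examined: the combinatorial explosion in miniature."""
    if pile == 0:
        # The previous mover took the last counter; the player now to move has lost.
        return -1 if my_turn else +1
    scores = [best_score(pile - take, not my_turn) for take in (1, 2) if take <= pile]
    return max(scores) if my_turn else min(scores)

print(best_score(9, my_turn=True))    # -1: nine counters is a forced loss for the first player
print(best_score(10, my_turn=True))   # +1: ten counters is a forced win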

…

With little fingers and computation, nanomachines would have in their Lilliputian world what people have in the big world: intelligence and the ability to manipulate their environment. Then these little machines could build replicas of themselves, achieving the field’s key objective.
The reason that self-replication is important is that it is too expensive to build these tiny machines one at a time. To be effective, nanometer-sized machines need to come in the trillions. The only way to achieve this economically is through combinatorial explosion: let the machines build themselves.
Drexler, Merkle (a coinventor of public key encryption, the primary method of encrypting messages), and others have convincingly described how such a self-replicating nanorobot—nanobot—could be constructed. The trick is to provide the nanobot with sufficiently flexible manipulators—arms and hands—so that it is capable of building a copy of itself. It needs some means for mobility so that it can find the requisite raw materials.

…

I do this not to belabor the issue of chess playing, but rather because it illustrates a clear contrast. Raj Reddy, Carnegie Mellon University’s AI guru, cites studies of chess as playing the same role in artificial intelligence that studies of E. coli play in biology: an ideal laboratory for studying fundamental questions.5 Computers use their extreme speed to analyze the vast combinations created by the combinatorial explosion of moves and countermoves. While chess programs may use a few other tricks (such as storing the openings of all master chess games in this century and precomputing endgames), they essentially rely on their combination of speed and precision. In comparison, humans, even chess masters, are extremely slow and imprecise. So we precompute all of our chess moves. That’s why it takes so long to become a chess master, or the master of any pursuit.

Rather than digging through a hierarchy yourself, just ask for what you need directly:
We added a method to Selection to get the time zone on our behalf: the plotting routine doesn't care whether the time zone comes from the Recorder directly, from some contained object within Recorder, or whether Selection makes up a different time zone entirely. The selection routine, in turn, should probably just ask the recorder for its time zone, leaving it up to the recorder to get it from its contained Location object.
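A minimal sketch of that delegation chain (Python, with invented class and method names, since the excerpt doesn't show the code): the plotting routine asks Selection, Selection asks its Recorder, and the Recorder consults its own Location, so no caller ever reaches through more than one object.

class Location:
    def __init__(self, time_zone):
        self._time_zone = time_zone

    def time_zone(self):
        return self._time_zone

class Recorder:
    def __init__(self, location):
        self._location = location

    def time_zone(self):
        # The recorder decides where its time zone comes from; callers never see Location.
        return self._location.time_zone()

class Selection:
    def __init__(self, recorder):
        self._recorder = recorder

    def time_zone(self):
        # The selection delegates to its recorder instead of exposing it.
        return self._recorder.time_zone()

def plot(selection):
    # Ask the immediate collaborator, rather than selection._recorder._location...
    print(f"plotting with times shown in {selection.time_zone()}")

plot(Selection(Recorder(Location("UTC-05:00"))))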
Traversing relationships between objects directly can quickly lead to a combinatorial explosion[1] of dependency relationships. You can see symptoms of this phenomenon in a number of ways:
[1] If n objects all know about each other, then a change to just one object can result in the other n – 1 objects needing changes.
Large C or C++ projects where the command to link a unit test is longer than the test program itself
"Simple" changes to one module that propagate through unrelated modules in the system
Developers who are afraid to change code because they aren't sure what might be affected
Systems with many unnecessary dependencies are very hard (and expensive) to maintain, and tend to be highly unstable.

…

Anyone can ask a witness questions in the pursuit of the case, post the transcript, and move that witness to another area of the blackboard, where he might respond differently (if you allow the witness to read the blackboard too).
A big advantage of systems such as these is that you have a single, consistent interface to the blackboard. When building a conventional distributed application, you can spend a great deal of time crafting unique API calls for every distributed transaction and interaction in the system. With the combinatorial explosion of interfaces and interactions, the project can quickly become a nightmare.
Organizing Your Blackboard
When the detectives work on large cases, the blackboard may become cluttered, and it may become difficult to locate data on the board. The solution is to partition the blackboard and start to organize the data on the blackboard somehow.
Different software systems handle this partitioning in different ways; some use fairly flat zones or interest groups, while others adopt a more hierarchical treelike structure.

Here’s a simple proof: suppose the people in a small company write down their work tasks— one task per card. If there were only 52 tasks in the company, as many as in a standard deck of cards, then there would be 52! different ways to arrange these tasks.8 This is far more than the number of grains of rice on the second 32 squares of a chessboard or even a second or third full chessboard. Combinatorial explosion is one of the few mathematical functions that outgrows an exponential trend. And that means that combinatorial innovation is the best way for human ingenuity to stay in the race with Moore’s Law.
Most of the combinations may be no better than what we already have, but some surely will be, and a few will be “home runs” that are vast improvements. The trick is finding the ones that make a positive difference.
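The deck-of-tasks arithmetic above is easy to check (a quick computation of my own, not part of the original text): 52! is roughly 8 × 10^67, while the grain counts, doubling square by square, stay far below it even after a third full board.

import math

print(f"52! = {math.factorial(52):.1e}")                     # about 8.1e+67 orderings
print(f"second half of one board = {2**64 - 2**32:.1e}")     # squares 33-64: about 1.8e+19 grains
print(f"a second full board = {2**128 - 2**64:.1e}")         # about 3.4e+38
print(f"a third full board = {2**192 - 2**128:.1e}")         # about 6.3e+57, still below 52!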

This meant that the machine ought to be able to solve any problem using first principles and experience derived from learning. Early models of general problem-solving were built, but could not scale up. Systems could solve one general problem but not any general problem.6 Algorithms that searched data in order to make general inferences failed quickly because of something called ‘combinatorial explosion’: there were simply too many interrelated parameters and variables to calculate after a number of steps. An approach called ‘heuristics’ tried to solve the combinatorial explosion problem by ‘pruning’ branches off the tree of the search executed by any given algorithm; but even this was shown to be of limited value. In the end, AI researchers came to realise that problems such as the recognition of faces or objects required ‘common sense’ reasoning, which was fiendishly difficult to code.

End-to-End/Integration Tests
As applications grow (and they tend to, really fast, before you even realize it), testing whether they work as intended manually just doesn’t cut it anymore. After all, every time you add a new feature, you have to not only verify that the new feature works, but also that your old features still work, and that there are no bugs or regressions. If you start adding multiple browsers, you can easily see how this can become a combinatorial explosion!
AngularJS tries to ease that by providing a Scenario Runner that simulates user interactions with your application.
The Scenario Runner allows you to describe your application in a Jasmine-like syntax. Just as with the unit tests before, we will have a series of describes (for the feature), and individual its (to describe each individual functionality of the feature). As always, you can have some common actions, to be performed before and after each spec (as we call a test).

Modularity embodies the principle of abstraction, allowing a certain amount of managed complexity through compartmentalization.
Unfortunately, understanding individual modules—or building them to begin with—doesn’t always yield the kinds of expected behaviors we might hope for. If each module has multiple inputs and multiple outputs, when they are connected the resulting behavior can still be difficult to comprehend or to predict. We often end up getting a combinatorial explosion of interactions: so many different potential interactions that the number of combinations balloons beyond our ability to handle them all. For example, if each module in a system has a total of six distinct inputs and outputs, and we have only ten modules, there are more ways of connecting all these modules together than there are stars in the universe.
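One way to make that estimate concrete (a back-of-the-envelope model of my own, not the author's): treat the ten modules' six ports apiece as sixty endpoints and count the ways of pairing them all up into connections; the result already dwarfs the usual estimates of 10^22 to 10^24 stars.

import math

ports = 10 * 6   # ten modules, six distinct inputs and outputs each
# Ways to pair 60 endpoints into 30 connections: 59 * 57 * ... * 1
pairings = math.prod(range(ports - 1, 0, -2))
print(f"{pairings:.1e}")   # about 2.9e+40 possible wirings
print(f"{10**24:.1e}")     # a generous estimate of stars in the observable universe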
In some realms that can be heavily regulated, such as finance or corporate structures, our dreams of increasing modularity or finding the ideal level of interoperability might work.

Ultimately, this style of AI came to be called the symbolic systems approach.
But the early AI researchers quickly ran into a problem: the computers didn’t seem to be powerful enough to do very many interesting tasks. Formalists who studied the arcane field of theory of computation understood that building faster computers could not address this problem. No matter how speedy the computer, it could never tame what was called the “combinatorial explosion.” Solving real-world problems through step-wise analysis had this nasty habit of running out of steam the same way pressure in a city’s water supply drops when vast new tracts of land are filled with housing developments.
Imagine finding the quickest driving route from San Francisco to New York by measuring each and every way you could possibly go; your trip would never get started. And even today, that’s not how contemporary mapping applications give you driving instructions, which is why you may notice that they don’t always take the most efficient route.

Perhaps, then, one could dedicate a node to each combination of concepts and roles. There would be a baby-eats-slug node and a slug-eats-baby node. The brain contains a massive number of neurons, one might think, so why not do it that way? One reason not to is that there is massive and then there is really massive. The number of combinations grows exponentially with their allowable size, setting off a combinatorial explosion whose numbers surpass even our most generous guess of the brain’s capacity. According to legend, the vizier Sissa Ben Dahir claimed a humble reward from King Shirham of India for inventing the game of chess. All he asked for was a grain of wheat to be placed on the first square of a chessboard, two grains of wheat on the second, four on the third, and so on. Well before they reached the sixty-fourth square the king discovered he had unwittingly committed all the wheat in his kingdom.

…

One cost is space: the hardware to hold the information. The limitation is all too clear to microcomputer owners deciding whether to invest in more RAM. Of course the brain, unlike a computer, comes with vast amounts of parallel hardware for storage. Sometimes theorists infer that the brain can store all contingencies in advance and that thought can be reduced to one-step pattern recognition. But the mathematics of a combinatorial explosion bring to mind the old slogan of MTV: Too much is never enough. Simple calculations show that the number of humanly graspable sentences, sentence meanings, chess games, melodies, seeable objects, and so on can exceed the number of particles in the universe. For example, there are thirty to thirty-five possible moves at each point in a chess game, each of which can be followed by thirty to thirty-five responses, defining about a thousand complete turns.

…

One is that memory cannot hold all the events that bombard our senses; by storing only their categories, we cut down on the load. But the brain, with its trillion synapses, hardly seems short of storage space. It’s reasonable to say that entities cannot fit in memory when the entities are combinatorial—English sentences, chess games, all shapes in all colors and sizes at all locations—because the numbers from combinatorial explosions can exceed the number of particles in the universe and overwhelm even the most generous reckoning of the brain’s capacity. But people live for a paltry two billion seconds, and there is no known reason why the brain could not record every object and event we experience if it had to. Also, we often remember both a category and its members, such as months, family members, continents, and baseball teams, so the category adds to the memory load.

However, once Backend B determines that the request to the DB Frontend can’t be served (for example, because the request has already been attempted and rejected three times), Backend B has to return to Backend A either an “overloaded; don’t retry” error or a degraded response (assuming that it can produce some moderately useful response even when its request to the DB Frontend failed).
Backend A has exactly the same options for the request it received from the Frontend, and proceeds accordingly.
Figure 21-2. A stack of dependencies
The key point is that a failed request from the DB Frontend should only be retried by Backend B, the layer immediately above it. If multiple layers retried, we’d have a combinatorial explosion.
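A rough sketch of why (an illustration of my own, not from the text): if every layer in the stack independently retries a failing call up to three times, each added retrying layer multiplies the attempts that reach the lowest backend by three.

ATTEMPTS_PER_LAYER = 3   # each layer tries a failing call at most three times

def attempts_at_bottom(retrying_layers):
    """How many attempts one user request can produce at the lowest backend
    if every one of the stacked layers above it retries independently."""
    return ATTEMPTS_PER_LAYER ** retrying_layers

for layers in (1, 2, 3, 4):
    print(f"{layers} retrying layer(s) -> {attempts_at_bottom(layers)} attempts at the DB Frontend")
# 1 -> 3, 2 -> 9, 3 -> 27, 4 -> 81: letting only the layer immediately above retry caps this at 3.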
Load from Connections
The load associated with connections is one last factor worth mentioning. We sometimes only take into account load at the backends that is caused directly by the requests they receive (which is one of the problems with approaches that model load based upon queries per second). However, doing so overlooks the CPU and memory costs of maintaining a large pool of connections or the cost of a fast rate of churn of connections.

…

By hosting the code that supports new functionality in the client application before we activate that feature, we greatly reduce the risk associated with a launch. Releasing a new version becomes much easier if we don’t need to maintain parallel release tracks for a version with the new functionality versus without the functionality. This holds particularly true if we’re not dealing with a single piece of new functionality, but a set of independent features that might be released on different schedules, which would necessitate maintaining a combinatorial explosion of different versions.
Having this sort of dormant functionality also makes aborting launches easier when adverse effects are discovered during a rollout. In such cases, we can simply switch the feature off, iterate, and release an updated version of the app. Without this type of client configuration, we would have to provide a new version of the app without the feature, and update the app on all users’ phones.
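A minimal sketch of the kind of client-side gate being described (flag names and functions are invented for illustration): the new code path ships with the client but stays dormant until a remotely delivered configuration switches it on.

# Stand-in for configuration fetched from the server at startup.
remote_config = {"new_checkout_flow": False}

def feature_enabled(name, config=remote_config):
    """Dormant code stays shipped but unreachable until its flag flips."""
    return config.get(name, False)

def run_old_checkout():
    return "old checkout"

def run_new_checkout():
    return "new checkout"

def checkout():
    if feature_enabled("new_checkout_flow"):
        return run_new_checkout()   # new code path, already present in the client
    return run_old_checkout()       # existing behavior remains the default

print(checkout())   # "old checkout" until the flag is flipped server-side; no new release needed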

The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise, by Martin L. Abbott and Michael T. Fisher

After twenty and a half chapters, you probably can sense where we are going.
You should implement just the right amount of fault isolation in your system to generate a positive shareholder return. “OK, thanks, how about telling me how to do that?” you might ask.

The answer, unfortunately, is going to depend on your particular needs, the rate of growth, the amount and causes of unavailability in your system, customer expectations with respect to availability, contractual availability commitments, and a whole host of things that result in a combinatorial explosion, making it impossible for us to describe for you what you need to do in your environment.

That said, there are some simple rules to apply to increase your scalability and availability. We present some of the most useful here to help you in your fault isolation endeavors.
Approach 1: Swim Lane the Money-Maker
Whatever you do, always make sure that the thing that is most closely related to making money is appropriately isolated from the failures and demand limitations of other systems.

…

is to build systems to answer “Where is the problem?” Often, these systems are out-of-the-box third-party or open source solutions that you install on systems to monitor resource utilization. Some application monitors might also be employed. The data collected by these systems help inform other processes such as our capacity planning process and problem resolution process. Care must be taken to avoid a combinatorial explosion of data, as that data is costly and the value of immense amounts of old data is very low.

Finally, we move to answer the question of “What is the problem?” This very often requires us to rely heavily on our architectural principle Design to Be Monitored. Here, we are monitoring individual components, and often these are proprietary applications for which we are responsible. Again, the concerns of data explosion are present, and we must fight to ensure that we are keeping the right data and not diluting shareholder value.

It was easy enough to understand why the arbiters of the system subdivided Motorized Land Vehicles (629.2) into several categories, but here in the 629.22s, where the books on automobiles were, you could see the planners' deficiencies. Automobiles divided into dozens of major subcategories (taxis and limousines, buses, light trucks, vans, lorries, tractor trailers, campers, motorcycles, racing cars, and so on), then ramified into a combinatorial explosion of sub-sub-sub categories. There were Dewey numbers on some of the automotive book spines that had twenty digits or more after the decimal, an entire Dewey Decimal system hidden between 629.2 and 629.3.
To the librarian, this shelf-reading looked like your garden-variety screwing around, but what really made her nervous were Alan's excursions through the card catalogue, which required constant tending to replace the cards that errant patrons made unauthorized reorderings of.

In the early 1950s, machines were taught how to play checkers and could soon beat respectable amateurs.28 In January 1956, Herbert Simon returned to teaching his class and told his students, “Over Christmas, Al Newell and I invented a thinking machine.” Three years later, they created a computer program modestly called the “General Problem Solver,” which was designed to solve, in principle, any logic problem that could be described by a set of formal rules. It worked well on simple problems like Tic-Tac-Toe or the slightly harder Tower of Hanoi puzzle, although it didn’t scale up to most real-world problems because of the combinatorial explosion of possible options to consider.
Cheered by their early successes and those of other artificial intelligence pioneers like Marvin Minsky, John McCarthy and Claude Shannon, Simon and Newell were quite optimistic about how rapidly machines would master human skills, predicting in 1958 that a digital computer would be the world chess champion by 1968.29 In 1965, Simon went so far as to predict, “machines will be capable, within twenty years, of doing any work a man can do.”30
Simon won the Nobel Prize in Economics in 1978, but he was wrong about chess, not to mention all the other tasks that humans can do.

* * *
When the limit is reached, it jars Huw’s self-sense like a long fall to a hard floor, every virtual bone and joint buckling and bending, spine compressing, jaws clacking together. It has been going so well, the end in sight, the time running fast but Huw and father-thing and ambassador running faster, and now—
“I’m stuck,” Huw says.
“Not a problem. We could play this game forever—the number of variables gives rise to such a huge combinatorial explosion that there isn’t enough mass in this universe to explore all the possible states. The objective of the exercise was to procure a representative sample of moves, played by a proficient emissary, and we’ve now delivered that.”
“Hey, wait a minute! ...” Huw’s stomach does a backflip, followed by a triple somersault, and is preparing to unicycle across a tightrope across the Niagara Falls while carrying a drunken hippo on his back: “You mean that was it?”


Ten Billion Tomorrows: How Science Fiction Technology Became Reality and Shapes the Future, by Brian Clegg

These memories are inserted into the doll’s brain using a chair with some kind of remote electromagnetic stimulus, transferring information stored on what appear to be computer hard drives (referred to in the show as “wedges”).
The Dollhouse approach, which bears a resemblance to a whole-brain version of the learning process in The Matrix, seems to underestimate the complexity of what’s going on inside a human skull. The number of potential connections of all the neurons in the brain provides a combinatorial explosion that would require every atom in the universe if we were to try to map out every possible combination. Of course, if the brain can store the data, so can an electronic device, but even in the actual connections in any particular brain, we are talking far more storage than is feasible in a compact device at the moment.
In a sense, the Dollhouse approach is more sensible than that in The Matrix, as it doesn’t require the programmer to pinpoint just where the expertise is recorded in order to be able to reproduce it.

But we could move toward AGI a lot faster if there were a nicer programming language with anywhere near the same scalability as C++. Moving on:

This is not quite a bottleneck, but I would say that if the Novamente system is going to fail to achieve AGI, which I think is quite unlikely, then it would be because of a failure in the aspect of the design wherein the different parts of the system all interact with each other dynamically, to stop each other from coming to horrible combinatorial explosions. A difficult thing is that AI is all about emergence and synergy, so that in order to really test your system, you have to test all the parts, put them together in combination, and look at the emergence effects. And that’s actually hard. The most basic bottleneck is that you are building an emergent system that has to be understood and tested as a whole, rather than a system that can be implemented and tested piece by piece.

The resulting rationalization of production processes and standardization of components had reduced manufacturing costs to such an extent that IBM had no effective competition in punched-card machines at all.
The biggest problem, however, was not in hardware but in software. Because the number of software packages IBM offered to its customers was constantly increasing, the proliferation of computer models created a nasty gearing effect: given m different computer models, each requiring n different software packages, a total of m × n programs had to be developed and supported. This was a combinatorial explosion that threatened to overwhelm IBM at some point in the not-too-distant future.
Just as great a problem was that of the software written by IBM’s customers. Because computers were so narrowly targeted at a specific market niche, it was not possible for a company to expand its computer system in size by more than a factor of about two without changing to a different computer model. If this was done, then all the user’s applications had to be reprogrammed.

The success of such a network may be evaluated by examining the number of congressmen surviving an attack and comparing such number to the number of congressmen able to communicate with one another and vote via the communications network. Such an example is, of course, farfetched but not completely without utility.”51
The more alternative connection paths there are between the nodes of a communications net, the more resistant it is to damage from within or without. But there is a combinatorial explosion working the other way: the more you increase the connectivity, the more intelligence and memory is required to route messages efficiently through the net. In a conventional circuit-switched communications network, such as the telephone system, a central switching authority establishes an unbroken connection for every communication, mediating possible conflicts with other connections being made at the same time.

The problematic before us then actually would become the Keynesian (or Olympian) one of learning to live “wisely and agreeably and well” under conditions of absolute and universal freedom from want.
In the end, what is it that people want from these technologies? As near as I can tell, a few want just exactly what some have always wanted from other human beings: a cheap, reliable, docile labor force. Others, though, are seeking something less tangible: sense, meaning, order, a ward against uncertainty.
They’re looking for something that might help them master the combinatorial explosion of possibility on a planet where nine billion people are continually knitting their own world-lines; for just a little reassurance, in a world populated by so many conscious actors that it often feels like it’s spinning out of anyone’s control. These are impulses I think most of us can relate to, and intuitively react to with some sympathy. And it’s this class of desires that I think we should keep in mind as we explore the mechanics of machine learning, automated pattern recognition and decision-making.

Similarly, Amdi had probably said that “someone” had betrayed “something”—but the software had generated the particular nouns from a long list of suspects.
It was amazing that Jefri had even made it onto that list, much less coming out at the top. So what logic had put him there? She drilled down through the program’s reasoning, into depths she had never visited. As suspected, the “why I chose ‘this’ over ‘that’” led to a combinatorial explosion. She could spend centuries studying this—and get nowhere.
Ravna leaned back in her chair, turning her head this way and that, trying to get the stress out of her neck. What am I missing? Of course, the program could simply be broken. Oobii’s emergency automation was specially designed to run in the Slow Zone, but the surveillance program was a bit of purely Beyonder software, not on the ship’s Usables manifest.

It also specifies many of the other features of Avro that implementations should support. One area that the specification does not rule on, however, is APIs: implementations have complete latitude in the API they expose for working with Avro data, since each one is necessarily language-specific. The fact that there is only one binary format is significant, since it means the barrier for implementing a new language binding is lower, and avoids the problem of a combinatorial explosion of languages and formats, which would harm interoperability.
Avro has rich schema resolution capabilities. Within certain carefully defined constraints, the schema used to read data need not be identical to the schema that was used to write the data. This is the mechanism by which Avro supports schema evolution. For example, a new, optional field may be added to a record by declaring it in the schema used to read the old data.
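To make the schema-resolution idea concrete, here is a hedged sketch using plain Python dictionaries to stand in for Avro's JSON schemas (the record and field names are invented): the reader's newer schema adds one optional field with a default, which is what lets it consume data written with the older schema.

# Schema the old data was written with (the writer's schema).
writer_schema = {
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": "string"},
    ],
}

# Newer schema used for reading (the reader's schema): one added, optional field.
# The default is what gets filled in when the writer's data lacks the field.
reader_schema = {
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": "string"},
        {"name": "display_name", "type": ["null", "string"], "default": None},
    ],
}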

The method by which his counterfactual canal system was derived is not fully explained in the Appendix, and his estimate of its performance is based on guesswork (p. 38).

Network optimization cannot be effected by linear programming, as Fogel mistakenly suggests. Making a connection between two locations involves a binary decision: the two locations are either connected or they are not. Network optimization is therefore an integer programming problem, and problems of this kind encounter combinatorial explosion: the number of possible network structures increases at an accelerating rate as the number of locations to be served rises.

An additional complexity arises from the fact that the optimal location for a railway junction may be in the middle of the countryside rather than at a town. Constraining all junctions to be at towns may reduce the performance of a network quite considerably. As indicated above, the actual network made extensive use of rural junctions, at places such as Crewe, Swindon, and Eastleigh, and lesser-known centres such as Evercreech, Broom, and Melton Constable.
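The rate of growth the reviewer describes is easy to exhibit (an illustration of my own): with n locations there are n(n-1)/2 possible links, and every subset of those links is a distinct network structure.

def possible_networks(locations):
    links = locations * (locations - 1) // 2   # candidate point-to-point connections
    return 2 ** links                          # every subset of links is a network structure

for n in (5, 10, 20, 40):
    print(f"{n} locations -> {possible_networks(n):.2e} possible networks")
# 5 locations give about 1e+03 structures; 40 locations already give about 6e+234.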

Finally, there are limiting factors to fast growth, such as economic returns (if very few can afford a new technology it will be very expensive and not as profitable as a mass market technology), constraints on development speed (even advanced manufacturing processes need time for reconfiguration, development, and testing), human adaptability, and especially the need for knowledge. As the amount of knowledge grows, it becomes harder and harder to keep up and to get an overview, necessitating specialization. Even if information technologies can help somewhat, the basic problem remains, with the combinatorial explosion of possible combinations of different fields. This means that a development project might need specialists in many areas, which in turn means that there is a smaller size of group able to do the development. In turn, this means that it is very hard for a small group to get far ahead of everybody else in all areas, simply because it will not have the necessary know-how in all necessary areas.

Our goal is to develop algorithms for checking membership in the least and greatest fixed points of a generating function F. The basic steps in these algorithms will involve “running F backwards”: to check membership for an element x, we need to ask how x could have been generated by F. The advantage of an invertible F is that there is at most one way to generate a given x. For a non-invertible F, elements can be generated in multiple ways, leading to a combinatorial explosion in the number of paths that the algorithm must explore. From now on, we restrict our attention to invertible generating functions.

21.5.3 Definition: An element x is F-supported if support_F(x)↓; otherwise, x is F-unsupported. An F-supported element is called F-ground if support_F(x) = ∅.

Note that an unsupported element x does not appear in F(X) for any X, while a ground x is in F(X) for every X.
For example, there is a JavascriptExecutor interface that provides the ability to execute arbitrary chunks of Javascript in the context of the current page. A successful cast of a WebDriver instance to that interface indicates that you can expect the methods on it to work.
Figure 16.1: Accountant and Stockist Depend on Shop
Figure 16.2: Shop Implements HasBalance and Stockable
16.4.2. Dealing with the Combinatorial Explosion
One of the first things that is apparent from a moment's thought about the wide range of browsers and languages that WebDriver supports is that unless care is taken it would quickly face an escalating cost of maintenance. With X browsers and Y languages, it would be very easy to fall into the trap of maintaining X×Y implementations.
Reducing the number of languages that WebDriver supports would be one way to reduce this cost, but we don't want to go down this route for two reasons.

I do not at all intend this to be a shocking indictment, just a reminder of something quite obvious: no remotely compelling system of ethics has ever been made computationally tractable, even indirectly, for real-world moral problems. So, even though there has been no dearth of utilitarian (and Kantian, and contractarian, etc.) arguments in favor of particular policies, institutions, practices, and acts, these have all been heavily hedged with ceteris paribus clauses and plausibility claims about their idealizing assumptions. These hedges are designed to overcome the combinatorial explosion of calculation that threatens if one actually attempts — as theory says one must — to consider all things. And as arguments — not derivations — they have all been controversial (which is not to say that none of them could be sound in the last analysis).
To get a better sense of the difficulties that contribute to actual moral reasoning, let us give ourselves a smallish moral problem and see what we do with it.

Consequently, we could simply declare only one version of the equality operator for complex:

bool operator==(complex, complex);

void f(complex x, complex y)
{
    x==y;     // means operator==(x,y)
    x==3;     // means operator==(x,complex(3))
    3==y;     // means operator==(complex(3),y)
}

There can be reasons for preferring to define separate functions. For example, in some cases the conversion can impose overheads, and in other cases, a simpler algorithm can be used for specific argument types. Where such issues are not significant, relying on conversions and providing only the most general variant of a function – plus possibly a few critical variants – contains the combinatorial explosion of variants that can arise from mixed-mode arithmetic.

Where several variants of a function or an operator exist, the compiler must pick “the right” variant based on the argument types and the available (standard and user-defined) conversions. Unless a best match exists, an expression is ambiguous and is an error (see §7.4).

An object constructed by explicit or implicit use of a constructor is automatic and will be destroyed at the first opportunity (see §10.4.10).