History and Economics

Part 1: The Digital Future

I was challenged to forecast changes in economic systems based both on my knowledge of spatial economic systems and on my experiences with computers, data networking, and automation. What I have come up with is a four par…

[Sidebar: While starting with a sidebar is unusual, to say the least, I felt it necessary because this post has gotten completely out of hand in terms of length. But then, when I thought about it, attempting to synthesize history and future history into a single post should run long. Also forgive me for injecting myself into many of the sidebars.]

It’s weird when you think about it: historians, archeologists, and other social scientists name the ages of human culture the early stone age, the late stone age, the copper age, the bronze age, the iron age, and so on.

I think there is a better way to understand the ages of humanity. It is through humanity’s creation and dissemination of information and knowledge. All the current ages of humanity are really based on these four transformations of data, information, and knowledge.

Knowledge Generation before Speech

Prior to the development of speech there were two ways that life created “information and knowledge”. The first started with the start of life itself on earth. It was the combination of changes in the DNA chains and natural selection by the environment. [Sidebar: This is still the foundation on which all other forms of knowledge creation are built; that is, except for a relatively small number of differences, human DNA and tree DNA are the same.] One form of this knowledge creation and communication is instinctual behavior. Additionally, this forms the basis for the concept of Environmental Determinism.

The second way “information and knowledge” was created was through the evolution of “monkey-see-monkey-do”; that is, through “open instincts”. An open instinct is one that allows the life-form, generally an animal, to observe its surroundings, orient the observations (food, a place to hide, a threat), decide on an action, and act. [Sidebar: Oh shoot, there goes Boyd’s OODA loop again.] No longer does DNA alone decide. To make the decision the life-form must learn to observe, choosing which input is data and which is noise, and must create a mental model in order to orient the observed data. Both of these require the ability and the time to learn. This learn-by-doing forms the basis for the concept of “Possibilism”.

The Age of Speech (~350,000 to 80,000 BC)

As noted, the learn-by-doing (monkey-see-monkey-do) process requires both the ability and the time to learn. According to studies of DNA and archeology, the average man lived about 20 years. [Sidebar: Note that even today a boy becomes a man at age 13 in the Jewish religion. This means that thousands of years ago, a man would have had 7 years to procreate before dying. Today, that 13-year-old kid is not even in high school.] So there is a period when the young need adult protection in order to learn. This may be a few months, as in the case of deer, or a couple of years.

The problem with learn-by-doing is that it requires both the ability to learn and the time to learn. Because DNA evolution continues with each biological experiment, each child, there will be significant variations in the ability to learn. So sometimes knowledge would be lost through a child’s inability to learn-by-doing. Other times the parent/coach died unexpectedly early, so that there was insufficient time for knowledge transfer. Either way, information and knowledge were lost.

At some point between approximately 350,000 BC and 80,000 BC, possibly in several steps, a new hopeful dragon (to use Carl Sagan’s term) was born. This hopeful dragon had some ability to articulate and an open instinct that most probably created a noun (the name of a thing) and/or a verb (the name of an action). This gave birth to language. And language allowed for learning-by-listening, which turned out to be a competitive advantage for the groups and tribes that had it compared with those that didn’t.

Learning-by-listening resolved the problem of losing knowledge gained by previous generations. As language evolved it enabled humans to communicate increasingly abstract concepts to others. Initially (for 100,000 years or so), much of this knowledge was communicated as statements of observation and commands; some of it evolved (likely at a much later date) into stories, odes, epic tales, sagas, and myths. These tales encapsulated the knowledge of prior generations: the tribal or cultural memory.

Toward the end of the period (~80,000 BC) in which speech and language were born, Homo sapiens started migrating from Africa. Some researchers believe this was due to the competitive advantage of speech and language, that is, better methods of knowledge accretion and communication compared with other animals.

The Age of Speech allowed for the accumulation of data, information, and knowledge. Much of this was passed along in the form of tales, odes, myths, and so on. At the same time, practical skills like hunting and gathering were learned more effectively when verbal instructions, and especially critiques, could be given. Students learned much faster and at a much higher level. The result was a differential in knowledge among the many, many small family groups and tribes.

After many millennia of inter-tribal wars, and with some inter-tribal trading, enough data, information, and knowledge was created to begin the long trek to civilization. [Sidebar: During the “hunter-gatherer” stage of human “civilization” there were no “Noble Savages”, just savages. According to DNA evidence and studies of tribes in New Guinea, the average male was killed at approximately 20 years old.] During the time from the Paleolithic through the Neolithic ages, knowledge accumulated very slowly. Archeologists have found that innovations diffused through the human population over hundreds of years. Many archeologists want to attribute this to trading, but evidence suggests that much of the time violence was involved.

The Age of Writing (~3000 BC)

Speech and language, enabling and supporting learning-by-doing and learning-by-listening, provided the basis for humans’ knowledge development for the next 70,000+ years. It was not until human organizations grew beyond a few hundred individuals, with a geographic territory beyond what a person could walk in a day, that humans needed data, information, and knowledge transfer/communications that went beyond speech.

At about the time the first large kingdoms were formed, the traders of the era apparently found a need to track their trading. And traders and trading were the main vehicle for communicating data, information, and knowledge during this entire period. [Sidebar: At least this is what the archeologists have found so far.] Additionally, the tribal shamans (priests) started to create documents so that their religious beliefs, traditions, knowledge, and tenets would not be lost by their successors. [Sidebar: These were the scientists of their age.] Consequently, religious documents, together with trade documents, are among the earliest writing found.

Understand, writing came into existence at about the same time as many large construction projects, like the pyramids and ziggurats. And this was when city-states, the forerunners of the modern state, formed.

For the next 4400+ years writing continued to be the main medium for data, information, and knowledge documentation and communication. During this time many kingdoms and empires rose and fell, including the Roman Empire, and a vast quantity of data, information, and knowledge was created, documented, and lost. [Sidebar: The worst was the destruction of the Library and Museum (University) at Alexandria.]

Finally, with the beginnings of the European Renaissance in the 1100s AD, schools in Italy and Spain, initially created to teach monks to read and write, began to collect and copy works from earlier times (including Greek and Roman). The copies were exchanged, and libraries began to appear within these schools, which came to be called, and to be, universities. [Sidebar: This age is called the “Renaissance” because it was the time when, initially, data, information, and knowledge were recovered and then new knowledge was documented.]

During this same period, and in part using the recovered knowledge base, came the slow innovation of new instruments, including the mechanical clock, new navigational instruments, and new methods of ship construction, all leading to an economic sea change in the European kingdoms. Further, during this time apprentice schools (schools of learn-by-doing) appeared in greater numbers and with more formality to their coursework. These schools taught “manual trades”: the start of formal engineering and technology programs.

The Age of Printing (1455 AD)

All during this time, more and more clerics (clerks) were copying more documents. And though the costs were high, there was major demand for more copies of books, like the Christian Bible.

In about 1440, a German, Johannes Gutenberg, developed a system that could make hundreds of copies. In 1455, he printed what is known as the Gutenberg Bible and created the technology infrastructure for a paradigm shift. He also printed a goodly number of these bibles.

Another German, Martin Luther, subsequently kick-started this shift by nailing his 95 theses to the church door in 1517. Prior to Luther, most Europeans could not read. The Roman Catholic clergy, up to and including the Pope, took advantage of this to create highly imaginative church doctrine that would provide them with a large money stream. Since they had been infected with the edifice complex, they used this money stream to indulge their favorite activity at Rome and elsewhere.

In his theses Luther attacked this church doctrine. Instead of the Pope being the final authority on Christianity, he preached that the Christian Bible was the final authority and that all Christians had the right to read it. So, by the late 1500s, there were many printed books in an increasing number of libraries, with an increasing number of Europeans (and shortly, American colonists) who could read. [Sidebar: Remember that Harvard College, now Harvard University, was founded in 1636.] And this was only step one of the Age of Printing.

Step two in the Age of Printing was Rev. John Wesley’s creation of “Sunday School”. Many or most of the members of Wesley’s sect, “the Methodists”, had been tenant farmers, laborers, or cottage-industry owners who had lost their jobs or their businesses in the early stages of the industrial revolution (the late 1600s and through the 1700s).

At this time, machines began to be used on farms and in factories, putting these people out of work. Wesley and the Methodists, by teaching them to read and write on Sunday, their day off from work, enabled them to move into and participate in the profits of the industrial revolution. Together with other movements toward “schooling”, the Age of Printing drove economic progress, creating the “middle class.” [Sidebar: In Colonial New England, early on—in the 1640s—primary schooling became a requirement. For more information, see my book.] Glossing over the many upgrades and refinements, knowledge creation and communication were based on printing technology until the 1980s, more or less.

During the Age of Writing, but particularly during the Age of Printing, the methods for the communication of data, information, and knowledge began to diverge from trade. In fact, in the US Constitution the founding fathers treated “the US mail” as a direct government function because they felt that communications for everyone was so important. On the other hand, they indicated that the government should “regulate” commerce among the states; and there is a great difference between a function of government and regulation by government.

The Age of Computing (~1940 AD)

There are two roots of the Age of Computing. These had to do with improving print-based data storage and the communication of data and printed materials. The first root was data and information communications. While there were many early attempts at high-speed communications over long distances in Europe over the ages, the first commercially successful telegraph was developed by Samuel Morse in 1837 [Sidebar: together with a standard code, coincidentally called Morse Code]. By the 1850s this telegraphic system had spread to several continents.

In 1874, Émile Baudot invented a printing telegraph that allowed any typist to type a message on a typewriter-like keyboard, which the machine would then translate into a telegraph code. A second machine would then print the message out at the other end. This meant that typists, rather than trained telegraphers, could send and receive messages. Additionally, the messages could be encoded and sent much faster. Three other inventions/innovations, the facsimile machine, the telephone [Sidebar: a throwback to the Age of Speech], and the modem, complete the initial introduction to the Age of Computing.

The second root was the evolution of the computer itself. Early in the industrial revolution, Adam Smith discussed the assembly-line process and the fact that tooling can be made to improve the quality and quantity of output in every activity in the process. Using this process, more or less, the hand tooling of the late 1700s gave way to increasingly complex powered mechanical tooling for manufacturing products in the 1800s and 1900s.

While that helped the manufacturing component of the business, it did not help the “business” component of the business. While the need to improve the information-handling component (reducing its time and cost) of a business was recognized in the 1500s, it wasn’t until 1851 that a commercially viable adding machine became available to help with the bookkeeping/accounting of a business. These machines produced a paper tape (printing) on which the inputs and outputs were reported.

From 1851 to at least 1955 these mechanical wonders were improved, to the point that in the early 1950s they were called “analog computers”. And for a short time there was discussion about whether analog or this new thing called digital computers were better. [Sidebar: Into the 1990s, tidal predictions were made by NOAA using analog equipment, since it kept proving to be more accurate.]

The bases for the electronic, digital computer came from several sources, mostly in the United States and in Britain, during the late 1930s and early to mid-1940s. However, it wasn’t until the invention of the transistor in 1948, coupled with the concept of the Turing Machine (which Alan Turing had formulated in 1936), that the first prototype commercial “electronic computers” were developed.

In 1956 I “played” with my first computer. It consisted of a Hollerith card reader for data input, electronics, a breadboard (a board with a bunch of holes arranged in a matrix) on which a program could be “coded” by connecting the holes with wires (soft wiring), and a 160 character wide printer for the output. The part I played with was the card sorter. Rather than sorting the data in the “computer”, it was done by arranging and ordering the Hollerith cards before inserting them into the card reader. The card sorter enabled the computer’s operator to sort them very much faster than attempting to sort them by hand.

By 1964, computers had internal memory, about 40K bits, and storage: tape drives (from the recording industry) and disks (giant multi-platter removable disks) holding up to 2MB of data. [Sidebar: I learned to code on two of these, IBM’s 1401 and 1620. I coded in machine language, the Symbolic Programming System, and Fortran I and II.] These computers had rudimentary operating systems (OS), with input and output being a card reader and a punch-card writer. And they had teletype machines attached as control keyboards.

Fast forward to 1975; by this time, technology had advanced to the point where teletypewriters were attached as input/output terminals. These ran at 80 to 120 baud (roughly ten characters per second: fast for a human typing, but very slow for a computer). Some old-style television-like (cathode ray tube, or CRT) terminals were becoming commercially available. Mostly, these were simply glass versions of teletype printers, allowing the user to type into or read from an 80-character-wide by 24-line green screen, at about the same speed as a 120-baud teletype. But Moore’s Law was in high gear with respect to hardware, so that every two years computers doubled in speed and capacity.
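That doubling compounds dramatically. A back-of-the-envelope sketch, assuming an idealized clean two-year doubling period (real hardware progress was lumpier):

```python
# Idealized Moore's Law compounding: if speed/capacity doubles every
# two years, growth over n years is 2 ** (n / 2).
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

# Twenty years at a two-year doubling is about a thousand-fold increase.
print(growth_factor(20))  # 1024.0
```

Which is roughly the gap between the terminals of 1975 and the PCs of the mid-1990s.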

In about 1980 networking started to develop commercially, though there were several services over telephone networks earlier. [Sidebar: The earliest global data network that I know of was NASA’s network for data communications with the Mercury spacecraft in 1961.] Initially, this development was in terms of a Local Area Network (LAN), linked through the use of telephone cables. [Sidebar: During this time, I set up some LANs at Penn State University and at Haworth, a furniture manufacturing company.]

By 1985 the Internet protocols had evolved. [Sidebar: Between approximately 1985 and 1993, a significant group of engineers created a set of protocols to international standards; they were called the Open Systems Interconnect, or OSI, protocols, and were based on a seven-layer model. This group formed one camp; the other formed around the amorphous, organically evolving TCP/IP group of protocols and included academics, hackers, and software and hardware suppliers. This second group preferred TCP/IP because it was a free, open-source technology with few if any real standards (one HP Vice President said of TCP/IP that it was so wonderful because there were so many “standards” to choose from) and because OSI required significantly more computing power, owing to the architectural complexity of its security and other functionality. Consequently, TCP/IP won, but we are now facing all of the security and functionality issues that would have been resolved by OSI.] [Sidebar: In 1987, I predicted that the internet would serve as the nervous system of all organizations and was again looked at like I had two heads.] And technology had evolved to the point that PCs on LANs were replacing CRTs as terminals to mainframe computers. Additionally, e-mail, word processing, and spreadsheet software were coming into their own, replacing typewriters and mail-carried memos and documents.

In the early 1990s fiber optic cables from the Corning Glass Works revolutionized data and information transfer: transfer times dropped from minutes to microseconds at approximately the same cost. [Sidebar: Since I worked with data networks from 1980 on, and since I led an advanced networking lab for a major defense contractor, I could go into the gory details for many additional posts, but I will leave it at that.] As fiber optics replaced copper wires, the speed of transmission went up and the cost went down. There were two consequences. First, the number of people connected to the internet drastically increased. Second, more people became computer literate, at least to the point of using automated devices, especially the children.

By 1995, the Internet was linking home and work PCs with the start of the web (~1993), and by the 1996/1997 timeframe the combination of home computers, e-mail, word processing, and the Internet/web was beginning to disrupt retail commerce and the print information system. At this point the computer started to affect all data, information, and knowledge systems, disrupting culture worldwide.

User Interfaces and Networking

As I discussed in a previous post and in “SOA and User Interface Services: The Challenge of Building a User Interface in Services”, The Northrop Grumman Technology Research Journal, Vol. 15, #1, August 2007, pp. 43-60, there are three sets of characteristics of every user interface. The first is the type of user interface, the second is how rich the interface is, and the third is how smart the interface is.

There are three types of user interfaces: informational, transaction-oriented, and authoring. The first is typical of the “Apps” on your smartphone: getting information. The second is transaction-oriented, meaning interacting with a computer in a repeated manner, as when an operator is adding new records to a database. The third is authoring. This doesn’t mean writing only; it means creating anything from a document, a presentation, a movie, or a song to an engineering drawing or a new “App”lication. This differentiation of the user interface only really developed in the late 1990s and early 2000s, as each of these types requires a different form factor for the interface and increasingly complex software supporting it.

A rich user interface is an interface that performs many functions internally, i.e., does a lot for you. As computer chips have become smaller, using less power, and much faster, the interface has become much richer. This started with the first graphics terminals (in which there were 24 by 80 addressable locations) in the early 1970s. Shortly after, real graphics terminals appeared, costing upwards of $100K. These graphics terminals required considerable computing power from the computers to which they were directly connected.

In an effort to relieve the host computer of having to support the entire set of user interface functions, Intel and others developed chips to perform those functions. When some computer geeks looked at the functionality of these chips (the Intel 8008 among them), they decided they could construct small computers from them: the genesis of the PC. [Sidebar: I was one of these. With two friends, a home-grown electrical engineer and an accountant, I tried to convince a bank to loan us $5000 to start a “home computer” company and failed; most likely because of my lack of marketing acumen.]

A smart user interface is one that takes the information of a rich interface and intercommunicates with mainframe applications (“the cloud”, which marketers like to pretend is a new concept) and their databases to bi-directionally update (share) their data. Rich interfaces have rapidly evolved as network technology has grown from copper wire in the 1950s to fiber optics, Wi-Fi, and satellite communications as competing interconnection technologies at the physical through network layers of the OSI model. These enabled first the Blackberry devices and phones, then in 2007 the iPhone and competing products. An “App” (from “application”) is a rich and generally “smart” user interface. [Sidebar: I put “smart” in quotes because many of these rich/smart apps require constant updating, burning data minutes like they are free. When you allow them to use only Wi-Fi they complain bitterly.]

The Library

Initially, in the late 1970s, information technology started to disrupt the printed information center, that is, the library. The library is the repository of printed documents (encompassing data, information, and knowledge) of the Age of Print. It uses a card catalog together with an indexing system, like the Dewey Decimal or Library of Congress systems, creating metadata that organizes the documents so a library’s user can find the documents pertaining to the user’s search requirements.

The disruption started with rudimentary databases’ (records management systems’) ability to control inventory, in the case of a library the inventory of books. Initially, automation managed the metadata about the library’s microfilm and/or microfiche collections. [Sidebar: Libraries used microfilm and microfiche technologies to reduce the volume and floor space of their collections, as well as to enable easier searches of those collections. These technologies greatly reduced the size of the material; for example, an 18 by 24 inch newspaper page could be reduced to less than a two-inch square (or rectangle). However, with so many articles in each daily paper, library patrons had difficulty finding articles on particular topics; enter automation.]

Initially, the librarians used the one or two terminals connected to the computer either to enter the metadata about what was on the microfilm or fiche or to pull that data for a library’s customer. They entered the data using a Key Word In Context (KWIC) indexing system.
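The idea behind a KWIC index is simple: every significant word of a title becomes a sort key, displayed with its surrounding context rotated around it. A minimal sketch of the technique (my own illustration, not the libraries’ actual software; the stopword list and the “/” rotation marker are assumptions):

```python
# Sketch of a Key Word In Context (KWIC) index: each non-stopword of a
# title becomes an index entry, shown with the title rotated around it.
STOPWORDS = {"a", "an", "the", "of", "in", "on", "and"}

def kwic_index(titles):
    entries = []
    for title in titles:
        words = title.split()
        for i, word in enumerate(words):
            if word.lower() in STOPWORDS:
                continue
            # Keyword first, then the rest of the title wrapped around it,
            # with "/" marking where the rotation happened.
            context = " ".join(words[i:] + ["/"] + words[:i])
            entries.append((word.lower(), context))
    return sorted(entries)

for key, context in kwic_index(["The Age of Printing", "History of Computing"]):
    print(f"{key:10} {context}")
```

Sorting the rotated entries alphabetically by keyword is what let a patron scan for, say, every title containing “printing” without the librarian predicting the search terms in advance.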

Gradually, as computing systems evolved the quantity and quality of metadata of what was in the libraries increased and access within the library’s computing system increased; generally with a terminal or two sitting next to the card catalog. However, none of the metadata was available outside the library.

With the advent of the World Wide Web standards and software (both servers and browsers) all of that changed. [Sidebar: Interestingly, at least to me, the two basic protocols of the web, HTML and XML, were derivatives of SGML, Standard Generalized Markup Language. SGML is a standard developed by the printing industry to allow it to transmit electronic texts to any location and allow printers at that location to print the document. It’s ironic that derivatives of that standard are putting the printing industry out of business. One of the creators of SGML worked for/with me for a while.]

With the advent of the Internet, browser and server software, and HTML (and somewhat later XML), the next step in the disruption of libraries as repositories of data, information, and knowledge started with search engines. The first commercially successful search engine was Yahoo. It used (as do all search engines) web crawler technology to discover metadata about websites and then organize it into a large database. The most successful search engine to date is Google, the key reason being that it was faster than Yahoo and contained metadata about more websites. These search engines replaced the card catalogs of libraries before the libraries really understood what they were dealing with. This has been especially true as a great deal of data and information has migrated to the web in various forms and formats.
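At the core of that crawler-plus-database design is an inverted index: a map from each term to the set of pages containing it, which is what makes lookup fast. A toy sketch of the idea (the page names and text here are made up for illustration; real engines add ranking, stemming, and much more):

```python
# Toy inverted index of the kind a crawler feeds: term -> set of pages.
pages = {
    "page1": "library card catalog metadata",
    "page2": "search engine metadata database",
}

def build_index(pages):
    index = {}
    for url, text in pages.items():
        for term in set(text.split()):
            index.setdefault(term, set()).add(url)
    return index

def search(index, query):
    # Return pages containing every query term (AND semantics).
    results = [index.get(term, set()) for term in query.split()]
    return set.intersection(*results) if results else set()

index = build_index(pages)
print(search(index, "metadata"))           # both pages match
print(search(index, "catalog metadata"))   # only page1 matches
```

The card catalog answered “where is this book shelved?”; the inverted index answers “which documents, anywhere, contain these words?”, which is why the latter displaced the former so quickly.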

One of the things many library users went to the library for, before the advent of the web, was to use encyclopedias, dictionaries, and other such materials. Now Wikipedia and other sites of this type are the encyclopedias, dictionaries, thesauri, and so on of the Age of Computing. Additionally, many people read newspapers and magazines at the library. These too are now available on any rich, smart user interface. [Sidebar: For the definitions see my paper on Services at the User Interface Layer for SOA. There is a link on this blog.] The net result is that libraries, as physical facilities, are nearly obsolete. Now “Big Data” (actually the marketing term for the knowledge management of the 1990s) libraries and pattern-analysis algorithms are taking the data, information, and knowledge development of the library to the next level, as I will discuss shortly.

Imaging: Photos, Videos, Television, Movies, and Pictures

One of the greatest transformations, so far, from the Age of Print to the Age of Computing is in the realm of imaging. Images, pictures if you will, have been found on the walls of caves inhabited in the early “stone age”, and some written languages are still based on ideographs. So imaging is one of the oldest forms of communication.

Late in the Age of Writing, in the Italian Renaissance, images became much more realistic with the “discovery” of perspective. Up to that point images (paintings) had been very “two-dimensional”; now they were three. Early in the Age of Print, actually starting with Gutenberg, woodcut images were included in printed materials. From 1800 onward, a series of inventors created photography, capturing images on photo-reactive film. Lithography allowed these images to be converted into printed images. Next, moving images (the movies) came into being, as did color photography.

From the 1960s, the U.S. Defense Department looked for methods and techniques to gather near real-time intelligence by flying over the area, in this case areas in the USSR; and the USSR objected. The first attempt used aerial photography, starting with a long-winged version of the B-57, then the U-2, and finally the SR-71. All of these used the then state-of-the-art film-based photography. But all had pilots, and only the SR-71 was fast enough to evade anti-aircraft missiles.

So a second approach was used: sending up satellites and then parachuting the film back to earth. There were two major problems with this approach. First was getting the satellite up in a timely manner; rockets at the time took days to launch, so getting timely, useful data was difficult. Second, having the film canister land at the proper location for retrieval was difficult.

Therefore, the US government looked for another solution. They, and their contractors, came up with digital imaging. This technology crept into civilian use over the next 20 years. Meanwhile, the photographic industry in the main ignored it, in part because of the relatively poor quality of the early images. But the images improved, in both resolution and the number of colors. Among other things, this led to the demise of the film businesses of Kodak and Fuji.

Another part of the reason the photo film industry ignored digital imaging was the quantity of storage, and the physical size of the storage units, required to hold digital images. But as Moore’s Law indicated, the amount of storage went up while the cost dropped drastically, and the size of the hardware needed decreased even more. With the advent of SD and Micro-SD cards there was no need for film. And with the advent of image standards like .tif, .gif, and .jpg, digital images could be shared nearly instantly.

Retail Selling

From before the dawn of history until 1893, trade (buying and selling) was a face-to-face business. In 1893, Sears, Roebuck and Company started selling watches, and then additional products, by catalog, using the railroad to deliver the goods. The catalog, coupled with the Wells Fargo delivery system running across the railroad network, allowed people in small towns to purchase nearly any “ready-made” goods, from dresses to farm implements. This helped mass production industries and helped to create cities of significant size. Sears then followed (or led) the way by building retail outlets (stores) in every town of even modest size.

This model of retailing is still the predominant model, but it is now being challenged by the Sears and Roebuck catalog model in an electronic, internet-based form of retailing. Examples include Amazon, eBay, and Google. Amazon rebooted the no-bricks-and-mortar retail catalog with an internet version; it is successfully disrupting the retail industry. Likewise, eBay recreated the earliest market model, trading in the local market, in a global version. Early in the existence of the internet, various groups developed search engines; currently, Google is the primary one. But it is supporting a concierge service of the sort the Agility Forum, The Future Manufacturing Consortium, said would be a requirement for the next step in manufacturing and retailing, that is, mass customization.

Additive Manufacturing

Early in my studies in economics, the professors tied the economic progress of the industrial age to mass production and to economies of scale. However, in the Age of Computing mass production is giving way to mass customization.

Initially, in the 1970s, robotic arms were implemented on mass production lines to reduce the costs of labor. [Sidebar: Especially in the automotive industry. At the time, US automakers found it infeasible to fire inept or unreliable employees due to union contracts. Additionally, the labor costs due to those contracts priced US automobiles out of competition with foreign automakers. To reduce their labor costs the automakers tried to replace labor with robotic, numerically controlled machines. They had mixed success, due to both the technical and the political issues raised. This is not unlike the railroads’ conversion from steam to diesel and the “featherbedding” that forced many railroads into contraction or bankruptcy.] By the 1990s automation, and in particular agile automation (automation that leads to mass customization), was becoming the business-cultural norm in manufacturing and fabrication industries. Automation is replacing employees in increasingly complex activities. It will continue to do so and will continue to enable increasing mass customization of products.

For thousands of years components for everything from flint arrowheads to automobile engine blocks to sculptures were created by subtracting material from raw stock. The subtracted material is waste. A person created a flint arrowhead by removing shards from a flint rock.

Automobile engine blocks are created by metal casting, then milling the casting to smooth the surfaces for the moving engine components.

Stone and wood sculptures use the same material removal procedures as creating an arrowhead. These too create waste. Some cast sculptures may not be milled or polished, but these are the exceptions and the mold for the casting is still waste material.

Recently, injection molding, a process similar to casting, has made it possible to create products with relatively little waste. But most component manufacturing processes create considerable waste.

However, with the rise of ink jet printing technology, people began to experiment with overlaying layers of material and found they could create objects. This technology is called 3D printing or additive manufacturing. It will have a much greater impact on manufacturing and mass customization.

A simple example is car parts for older model vehicles. A car enthusiast orders a replacement part for the carburetor in his 1960s vintage muscle car. The after-market parts company can create the part using additive technology rather than warehousing hundreds of thousands of parts, just in case. The enthusiast gets a part that is as good as, or perhaps better than, the original; the after-market parts company doesn't need to spend money on warehousing; and the manufacturing process doesn't produce waste (or at least only a nominal amount).
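A rough way to see why this matters economically is a toy cost comparison. This is a minimal sketch; every figure in it (part counts, unit costs, storage rates) is an invented assumption for illustration, not industry data.

```python
# Toy comparison: warehousing legacy parts "just in case" vs. printing on demand.
# All figures are illustrative assumptions, not industry data.

def warehouse_cost(parts_stocked, cost_per_part, storage_per_part_year, years):
    """Up-front mass production plus years of storage for a speculative inventory."""
    return parts_stocked * (cost_per_part + storage_per_part_year * years)

def print_on_demand_cost(orders, print_cost_per_part):
    """Only parts actually ordered are produced; nothing is stored."""
    return orders * print_cost_per_part

stocked = warehouse_cost(parts_stocked=100_000, cost_per_part=20.0,
                         storage_per_part_year=1.5, years=10)
printed = print_on_demand_cost(orders=4_000, print_cost_per_part=45.0)

print(f"warehouse: ${stocked:,.0f}")   # $3,500,000
print(f"on demand: ${printed:,.0f}")   # $180,000
```

Even though each printed part costs more than twice as much to make, producing only the 4,000 parts actually ordered dominates stocking 100,000 on speculation.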

Researchers are now using this technology to create bones to replace bones shattered in accidents, war, and so on, and, in nano-versions, to create a wide variety of products. [Sidebar: Actually, one of the first "demonstrations" of the concept was on the TV show Star Trek, where the crew went to a device that would synthesize any food or drink they wanted.]

In the future this technology will disrupt all manufacturing processes while creating whole new industries, because it can create products that meet the customer's individual requirements better, while costing less and being produced in less time. For example, imagine a future where this technology can create a new heart identical to the heart that needs replacement, except fully functional; researchers are looking into technology that could, one day, do that.

Automotive

The automotive industry is already starting to feel the effects of the Age of Computing. The automotive industry has been based on cost efficiency since Henry Ford introduced the assembly line. The industry was among the first to embrace robots on the assembly line. But, there is much more.

The cell phone is becoming the driver's interactive road map. This road map tells the driver which of several routes is the shortest in driving duration, based on current traffic and backups as well as speed and distance.
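Under the hood, such a road map is solving a shortest-path problem in which roads are weighted by current travel time rather than distance. A minimal sketch using Dijkstra's algorithm over a made-up road network (the place names and travel times are invented for illustration):

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over edges weighted by current travel time (minutes)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, minutes in graph.get(node, {}).items():
            nd = d + minutes
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk back from goal to start to recover the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Edge weights are travel times under current traffic, not distances:
# today the freeway leg is backed up, so the surface streets win.
roads = {
    "home":    {"freeway": 10, "surface": 12},
    "freeway": {"office": 25},   # congestion on the freeway
    "surface": {"office": 14},
    "office":  {},
}
print(fastest_route(roads, "home", "office"))  # (['home', 'surface', 'office'], 26.0)
```

When traffic clears and the freeway edge drops back to, say, 12 minutes, the same algorithm switches the recommendation, which is exactly the behavior the text describes.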

Since the 1970s automobiles have had engine sensors and “a computer” to help with fuel efficiency and identifying engine malfunctions. These have become increasingly sophisticated.

Right now the automotive industry is driving toward self-driving cars. There are some on the roads already, and many cars have sensors (and "alerts") that "assist" drivers in one or more ways.

In the Near Future

And there are many industries which, like the automotive industry, are feeling the effects of the Age of the Computer. That is, there are many more systems which the technology and processes of the Age of Computing are disrupting.

While processes are in transformation today, it’s nothing compared with what will happen in the immediate and not very distant future.

Education

Shortly, in the Age of Computing, information technology will disrupt schools. People learn in two ways, by doing (showing, or “hacking”) and by listening. And everyone learns using differing combinations of these two methods.

Technology can and will be used to “teach” in all of these combinations. Therefore, “the classroom” is doomed.

Some students learn by doing, a method that "academics" pooh-pooh; only "stupid" children take shop, apprenticeships don't count, and you must have a "degree" to get ahead.

However, children do learn by doing, and enjoy it. Why do you think that so many boys, in particular, choose to play video games?

Why is it that pilots of the United States Navy have to go through 100 hours or more of computer simulations before trying a carrier landing? Because they learn by doing.

In the near future most jobs will require learning by doing. Learning by doing includes simulations, videos, problem solving, and labs. Automation has impacted, and increasingly will impact, all of these, giving the learn-by-doers the opportunity the current mass production education system doesn't.

The other method for learning is “learn by listening”. Learn by listening includes reading and audio (audio includes both lectures and recordings of lectures). Over the past two hundred years, these have been the preferred methods of “teaching” in mass production public schools.

In the main, it has worked "well enough" for a significant percentage of students, but numbers of students have fallen out of the system. Part of the problem is that some teachers can hold the interest of some students better than others, other teachers may hold the interest of an entirely different group of students, and some may just drone on.

Now, using the technology of the age of computers, students will be able to listen to lectures from teachers that they are best suited to learn from. This means that the best teachers are able to teach hundreds of thousands of students across the globe, not just the 30 to 50 using the tools of the age of print.

It also means that students can learn in ways that more closely align with their interests. [Sidebar: I saw a personal example of this when I was working on my Ph.D. at the University of Iowa. The Chair of the Geography Department, Dr. Clyde Kohn, was also a wine connoisseur. He decided to offer a course called "The World of Wines" to a group of 10 to 15 students. He would teach them about the climates and geomorphology (soils, etc.) that create the various varieties of wine. He would also teach them about wine making and distribution worldwide, so there was physical and economic geography involved. In the first 5 minutes of enrollment the class was filled and students were clamoring to get into it. He opened it up. By the time all students had enrolled there were 450 students in the geography class, and they probably learned more geographic information than they ever had before. It also gave the state legislature apoplexy.] As the technology becomes more refined, students will be able to learn whatever they need to learn without ever going near a classroom. I suspect that home (computer) schooling will become the norm. Even "class discussion" can be carried on using tools like Skype or GoToMeeting. Sports will be team-based rather than school-based.

I will define a prescriptive architecture for education in another post. It turns the educational system on its head. [Sidebar: Therefore, it will be ignored by the academic elite.]

Medicine

Medicine, too, is starting, and will continue, to undergo a complete disruption of the way it is performed (not practiced).

Currently, most medical performance is in the rational Ouija-boarding stage and uses mass production methods, not mass customization. But all people are biological experiments and are, consequently, individuals. Yet every malfunction is treated the same way.

To get the best result for the individual, each type of drug and dosage of that drug should be customized for the individual from the start—not by trial and error.

In the near future, people will be diagnosed using their complete history, analysis of their DNA, body scanning, and other diagnostic measurements (both current and undiscovered). Then, using additive nano-technology, an exact prescription will be created. The medicine may be introduced into the individual as a single pill, mixed with a liquid, through a shot, or by some other method.

Much of this analysis will be done by a computer. Already, in the 1970s, a program simulated a patient so that medical students could attempt to diagnose the "patient's" problem. In order for this program to serve its intended function, the MDs and Computer Assisted Instruction mavens continually refined the data used by the program. If this continued, and I suspect it did, the database from this single program could have been used by an analysis program to produce a diagnosis comparable with that of expert diagnosticians.
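A heavily simplified sketch of how such a database-driven diagnostic aid might work: score each candidate condition by how much of its expected symptom set the patient reports. The condition/symptom table below is invented for illustration; a real system's data would be refined continually by physicians, as described above.

```python
# Minimal sketch of a database-driven diagnostic aid, in the spirit of the
# 1970s patient-simulation program described in the text.
# The condition/symptom table is invented for illustration only.

CONDITIONS = {
    "influenza":    {"fever", "cough", "aches", "fatigue"},
    "strep throat": {"fever", "sore throat", "swollen glands"},
    "common cold":  {"cough", "sore throat", "runny nose"},
}

def rank_conditions(reported):
    """Score each condition by overlap between reported and expected symptoms."""
    reported = set(reported)
    scores = {}
    for name, expected in CONDITIONS.items():
        overlap = len(reported & expected)
        scores[name] = overlap / len(expected)  # fraction of expected symptoms seen
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_conditions({"fever", "cough", "fatigue"})
print(ranking[0][0])  # influenza
```

The point is not the (trivial) scoring rule but the architecture: the medical knowledge lives in a continually refined data table, and the diagnostic logic is a thin, reusable layer on top of it.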

This type of program could be, and likely will be, used by every hospital in the country, saving time and a great deal of money in identifying problems. The key reason it is not used today is that it has poor "bedside" manners—but so do many of the best diagnosticians.

Also, in many situations, this will take “The Doctor” out of the loop.

For example, instead the patient walks into "the office", which may be in front of the home computer. The analysis "App" asks the patient questions and gets the patient's permission to access his or her medical record. If the patient is at home and the "Analysis App" needs more information, the app may ask the patient to go to the nearest analysis point of service (APOS) for further tests.

At the APOS the patient would lie on a diagnostic table, not unlike those mocked up in Star Trek. This table would have all the sensors needed to take the necessary measurements; in fact, there will be a mobile version of this table in the back of a portable APOS vehicle.

Once the analysis is complete, the APOS will use additive manufacturing to incorporate all of the medicines needed in a form usable by the patient.

For physical trauma or where this is irreparable damage to a bone or organ, additive manufacturing will create the necessary bone or organ and a robotic system will then transplant it into the patient’s body.

The heart of this revolution in medical technology is Integrated Medical Information System based on the architecture I’ve presented in the post entitled “An Architecture for Creating an Ultra-secure Network and Datastore.” Without such an ultra-secure system for the medical records of each individual, externalities are too grave to consider.

However, even with an Integrated Medical Information System there will be substantial side effects for all stakeholders: doctors, nurses, technicians, and patients. There need no longer be any medical professions, except within medical research organizations.

Because the recurring costs of an APOS are low when compared with the current doctor’s office/hospital facility, all people should be able to pay for their own medical costs. So there will be little or no need for insurance.

Additionally, because medicines are manufactured on a custom basis as needed by the patient, there will be no need for pharmacies or systems for the production and distribution of medicines.

With no medical professionals, no insurance, and no need for the production and distribution of medicine, this whole concept will be fought, in savage conflict, by those groups, as well as by Wall Street and federal, state, and local welfare agencies, all of whom will lose their jobs. However, it will be inevitable, though perhaps greatly slowed by governmental regulation.

Again, I will say a good deal more on this topic in a separate post.

Further into the Future

There are three alternative future cultures possible in the Age of Computers, the Singularity, Multiple Singularities, or the Symbiosis of Humans and Machines. These may all sound like science fiction or fantasy, but they are based on my 50+ years of watching the Age of Computers and technology advance.

The Singularity

In a story someone told me in the 1960s, a man created a complex computer with consciousness. He created it to answer one question: "Is there a god?" The computer answered, "Now there is." One definition of "The Singularity" is that all of the computers and computer-controlled devices, like smart phones, become "cells" in a global artificial consciousness.

Many science fiction writers and futurists have speculated on just such an occurrence and its implications. John von Neumann first used the term "singularity" in the early 1950s as applied to the acceleration of technological change and its end result.

In 1970, futurist Alvin Toffler wrote Future Shock. In this book, Toffler defines the term “future shock” as a psychological state of individuals and entire societies where there is “too much change in too short a period of time”.

The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by Ray Kurzweil.

Many science fiction writers and many movies have speculated about what happens when the Singularity arrives. For the most part these stories take the form of man/machine wars or conflicts. In the first Star Trek movie, the crew of the Enterprise had to battle a world-consuming machine consciousness. In the Terminator series of movies it's man versus machine, and man and machine versus a machine. And in The Matrix, it's about man attempting to liberate himself from being a slave of the machine consciousness. [Sidebar: In the mid-1970s I had a very interesting discussion with Dr. John Crossett about the concept that formed the plot for The Matrix.]

There are literally hundreds of other books and short stories about dealings and conflicts with the singularity. While this is all science fiction, science fiction has often pointed the way to science and technology fact.

Multiple Singularities

A second scenario is that, because of advances in artificial intelligence, there are multiple singularities. Again, science fiction has dealt with this scenario. Isaac Asimov was one who dealt with multiple singularities and their results, in his I, Robot series of stories. In this scenario, more than one robot achieves consciousness. In these scenarios, humanity plays a subordinate role to the "artificial intelligences", which interact with each other in both very human and very un-human ways.

Symbiosis of Humans and Machines

The best set of scenarios, from the perspective of humanity, is the symbiotic scenarios. All multi-cell life above a very rudimentary level is composed of a symbiosis of cells and bacteria. So it is reasonable that there could be a symbiosis of humans and machines.

For example, nano-bots could be inserted that would deliver toxins to cancerous cells to kill those cells directly, inhibit their transmission of the cancer-causing agent to other cells, or link with the brain with orders to repair any damaged cells. These nano-bots would be excreted when their work is complete.

Taking this a step further, these nano-bots could allow the human brain direct access to the information on the Internet or "in the cloud" (as marketers like to say). [Sidebar: "Cloud Computing" has been with us ever since the first computer terminals used a proprietary network to link themselves to a mainframe computer. Yes, the technology has been updated, but it's still remote computing and storage.] This would mean that all you would have to do is think to watch a movie or gain some knowledge about the world around you. The very dark downside of this is that terrorists, politicians, news commentators, or other gangsters and thugs could control your thinking, i.e., direct mind control. And the artificial consciousness could actually take over and use humans to its benefit. [Sidebar: Remember, a thief is nothing more than a retail politician, retail socialist, or retail communist. Real politicians, socialists, and communists steal at the wholesale level.] This mind control is the ultimate greedy way to steal—anyone whose mind is controlled is by definition a slave of the mind controller.

“Space the Final Frontier”

I see only one way out of the mind-control conundrum: traveling into and through space. Once humans leave the benign environment of the earth, the symbiosis of humans and machines (computers and other automation) becomes imperative for both humans and their automated brethren. Allies are not made in peace, only when there are risks or threats.

Even the best astrophysicists readily admit that we don't understand our universe and that, as humans, we may never be able to. There is simply too much to fathom. However, in symbiosis with an artificial consciousness, we may be able to take a stab at it.

This post is based on an unpublished paper I wrote in 2008 and 2009. Dr. Chris Marai helped me by commenting and proposing edits. The concepts it presents, and some sections of the writing, I subsequently used in my book, Organizational Econom…

The thesis of this post is that it is pretty silly to base an economy, like that of the United States, on housing, finance, and government, which is what Wall St. and Pennsylvania Ave. seem to want to do.

Types of Industries

All organizations are constructed from three types of sub-organizations, which are within their domain. The domain would normally be considered a political unit, for example, a city, county, state, or country. However, even in private organizations these types of organizations exist, within the organization's functions and departments. These organizational categories[i] are:

·Primary Industry – Organizations that are in an industry that creates a product or service that is exported beyond the boundaries of the domain within which it is produced.

·Secondary Industry – Organizations that are in an industry that enables and supports one or more of the processes of the primary industry within the domain in which it operates.

·Tertiary Industry – Organizations that are in an industry that enables and supports both the primary and secondary industries by providing services that support the environment in the domain within which the primary and secondary industries operate.

As I demonstrate in my book, Organizational Economics: The Formation of Wealth, the primary industry (or industries) is the economic engine that forms the value of the organization for other organizations. Hamel and Prahalad called the turbine of this engine the organization's core competence.[ii] It produces the value for the organization. All other "industries" enable and support this engine. For example, the economic engine and primary industry of Detroit, Michigan, has been and continues to be the automotive industry; in Silicon Valley it's information technology; in the State of Iowa it's agriculture; and so on.

Secondary industries are sub-contractors and suppliers of hardware, software, and services to the primary industries. These industries would include auto parts suppliers, tool manufacturers, transportation within the organizational domain, and other organizations directly supporting the primary industry or industries.

Tertiary industries are organizations that enable and support the personnel or the domain's infrastructure: schools, colleges, and universities; banks and other financial services; municipal services (e.g., electric, communications, roads and bridges, sewer, water, and so on); food stores and other stores; hospitals and other medical services; restaurants, fast food outlets, and so on. In other words, the majority of economic activities within an organizational domain. Additionally, tertiary industries include all types of construction, as well as defense (see Security, a Mission of Government). These industries are where most of the economic activity of an organization occurs.
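The three categories can be encoded as a toy classifier that applies the definitions above in order: does the organization export beyond the domain (primary)? If not, does it supply the primary industries (secondary)? Otherwise it is tertiary. This is a minimal sketch; the example organizations and their attributes are hypothetical.

```python
# Toy encoding of the three industry categories defined in the text.
# The rule order follows the definitions: export test first, then
# supplier test, with tertiary as the default. Examples are hypothetical.

def classify(org):
    if org["exports_beyond_domain"]:
        return "primary"
    if org["supplies_primary"]:
        return "secondary"
    return "tertiary"

detroit = [
    {"name": "auto assembler", "exports_beyond_domain": True,  "supplies_primary": False},
    {"name": "parts supplier", "exports_beyond_domain": False, "supplies_primary": True},
    {"name": "grocery store",  "exports_beyond_domain": False, "supplies_primary": False},
]

for org in detroit:
    print(org["name"], "->", classify(org))
# auto assembler -> primary
# parts supplier -> secondary
# grocery store -> tertiary
```

Note that the classification is relative to the domain: the same grocery chain would be a primary industry of the domain that contains its headquarters and exports its services.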

Types of Value

In the first chapter of Organizational Economics, I describe three types of value: knowledge value, capacity value, and political value.

Knowledge value (see Knowledge Value) is value created by an increasing knowledge-base and includes research and development (invention and innovation) and knowledge transfer (education). Products based on new scientific discoveries and transferred into production are the most highly valued. Unique user interface designs like the iPhone, or innovative medicines, are examples of knowledge value.

Capacity value (see Capacity Value) is "more of the same" value. Once a product has been perfected and competitors have brought out versions, what Adam Smith called "the invisible hand" starts to force reductions in the cost of the product. Many economists refer to this as the commoditization of a product, but its value is in capacity production—which produces capacity value.

Political value (see Political Value) is of two types, mediating and exploitive.

·Mediating (or mediated) political value is created by reducing the organization's internal process friction. Examples of mediating political value include contracts, laws, customs, codes, standards, policies, and so on. In the military, mediating political value (reduction in process friction) comes from "the rules of engagement" (e.g., don't shoot your fellow military). The reduction in process friction is very often the difference between a process adding value and a process absorbing value. The regulation of markets (and the processes of the markets themselves) is such an example.

·Exploitive political value is indirect or "siphoned" value. It is caused by someone in a position of responsibility or authority using the position to reap value for their own benefit; "The Lord of the Manor" is the archetypal example, though these also include dictators, lobbyists, bankers, day traders, and many judges and legislators. Further, as I describe in my book, in many cases they include various religious authorities.

Housing, Finance, and Government as Value creators

My thesis is that housing, finance, and government either do not create value or create very little value. I base this on an understanding of how these fit within the dimensions described in the previous sections.

Housing

A house is worth a house. While that seems to be a tautology (and it is), too many people forgot it during "the housing bubble". What that saying means is that the value of the house is only the value it imparts to the consumer of the house's value. The house is never worth more than when it was built, unless it is maintained and upgraded. And even when it is upgraded, the value of the house begins to decrease as it is used (what's being used, at the most abstract, is its value). The problem, recently, has been that governments tend to inflate their money supply—money being a reserve of value. With the inflation of money (that is, the decrease in the value of money) the price of a house increases, though its value remains the same; it's worth one house. Likewise, when the housing market "goes down", the price of the house goes down, but the value remains the same: one house.
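The price/value distinction can be shown with a little arithmetic. This is a minimal sketch assuming a hypothetical $200,000 house and a steady 3% annual inflation rate; the numbers are illustrative, not data.

```python
# Price vs. value of a house under monetary inflation: the nominal price
# rises with the money supply while the house remains worth one house.
# The $200,000 price and 3% rate are assumptions for illustration.

def nominal_price(initial_price, inflation_rate, years):
    """Price in inflated dollars after compounding at inflation_rate."""
    return initial_price * (1 + inflation_rate) ** years

price_now = 200_000
price_later = nominal_price(price_now, 0.03, 10)
print(f"nominal price after 10 years: ${price_later:,.0f}")

# Deflating the later price by the same factor recovers the original:
real_price = price_later / (1.03 ** 10)
print(f"in today's dollars: ${real_price:,.0f}")  # $200,000 -- still one house
```

The nominal price climbs by roughly a third over the decade, yet deflating it returns exactly the starting figure: the "gain" is entirely a change in the measuring stick, not in the house.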

House construction and remodeling is a tertiary economic activity. It produces some capacity value (more-of-the-same value) for the builder and construction workers, but once completed and purchased, the house starts losing value. In giving the people of the organization a place to live, a house supports the secondary and primary industries of the organization.

Obviously, this is not an activity that enables and supports the formation of wealth for an organization. Consequently, basing an economy, or at least a significant portion of an economy, on housing is foolish and silly. Yet, in the period from 1995 to 2007, that is what many Americans built their perceived wealth on, and what the United States did.

Finance

Finance includes two subtypes: banks and markets. The Wall Streeters (e.g., bankers, hedge fund managers, stockbrokers, pension fund managers, and so on) have forgotten that a bank is a value battery and "a market" is the transfer point for the value.

Banks dilute the stored value of money through investments; this increases risk and potentially increases the amount of value through the implementation of discoveries and inventions as new products, systems, or services. In and of itself, investing cannot increase the amount of value, only reduce it. Only when the money is invested in innovative ideas or in production capability (see ROI vs VOI) does the value increase. So, for example, loaning money for a house does not increase the value of the house or create value of any sort. However, if a bank loans money to a farmer to buy seed or farming implements, the bank has made an investment that does create capacity value—food. Consequently, banks are tertiary activities that do not themselves produce an increase in value, but they loan their repository of potential value (money) to primary and secondary activities that do.

In the process of each transaction, the bankers siphon off some of the value as a "transaction" fee. This siphoning converts potential value into exploitive political value; and exploitive political value is value that is quickly destroyed.

Markets have two missions. The first is to measure the value of a material, product, or organization. The second is to transform value from real to potential and back; that is, to trade materials or stocks for money (potential value) or money for materials and stocks. "Making a market" does both of these; and in this Internet age, anyone can do it. That is, a person can buy commodities, hold them, and sell them. In the process, the price of the commodity (be it materials, products, or stocks) converges on a price.

Again, markets are tertiary activities that can convert knowledge and capacity value into potential value and the reverse. And, again, the "market makers" and "stock brokers" who siphon a percentage off, because they are "providing a service" (which to some degree they are), are converting some of the value and potential value into exploitive political value. Unfortunately, a good many Wall Streeters have turned the markets into legal mega-slot machines, gaming them through "day trading" and even "micro-second trading" to siphon off as much value as possible as quickly as possible, converting it into exploitive political value.

Government

According to my book, Organizational Economics: The Formation of Wealth, and as noted above in this post, a government has three missions: security, standards, and infrastructure. Internal and external security, standards, and infrastructure are mediating political value, and all three are tertiary activities, that is, necessary but not sufficient conditions for the growth of value within the domain of the organization. Further, the second and third activities can be quaternary; that is, activities, like the enactment of laws and the determination of regulations, policies, and standards, that enable the standards and infrastructure activities. These activities are very susceptible to manipulation for personal gain. The personnel who enact or fund the activities can enjoy an extreme amount of exploitive political value, as I describe in my book. In the past, it has been the lord of the manor, dictator, duke, king, emir, priest, shaman, rabbi, imam, or other religious leader. Today, lobbyists must be included, as they encourage lawmakers to create uneven economic playing fields that favor one activity or one industry over another; this includes unions and other "not for profit" organizations as well as economic organizations. Consequently, mediated political value is much more easily converted into exploitive political value than either knowledge or capacity value is, and is the catalyst for the conversion of these.

In this age, "Entitlements" are the single biggest creator of exploitive political value. These safety nets drain value from the infrastructure portion of government. They are popular because the exploitive value goes into the pockets of the many rather than the few, and popular with politicians because Entitlements buy votes. But entitlements are unsustainable for any organization, as Greece and Italy have proven, and as the United States is likely to prove, now that the population is addicted to Entitlements. For example, the Occupy Wall St. movement feels that all college graduates are "entitled" to jobs (so what value is art history or black studies to an economic organization?).

The Net Result

Too much "unearned income" in too few wallets; too much "Entitlement income" in too many wallets. I think what I've shown is that having an economy based on housing, finance, and government, like that toward which the United States is heading, is a sure recipe for going out of business.

We still have time, but do we have the leadership?

[i] These categories of industries were generally accepted from the 1920s onward as primary: mining and agriculture; secondary: manufacturing; and tertiary: services. These definitions are outdated and don't get at the underlying concepts. Therefore, I've redefined them for a more general meaning of the concepts.

[ii]G. Hamel and C. Prahalad, Competing for the Future: Breakthrough Strategies for Seizing Control of Your Industry and Creating the Markets of Tomorrow, (Boston: Harvard Business School Press, 1994).

Until the early 1960s, the discipline of architecture (or functional design) focused on the creation/design/development/implementation of products like buildings, cars, ships, aircraft, and so on. Actually, other than in buildings, most of the architects were called "functional designers", or some such term, to differentiate them from detailed designers and engineers/design analysts. This is part of the reason that most people associate architecture and architects with the design of homes, skyscrapers, and other buildings, but not with products, systems, or services. In fact, architects themselves are having a hard time identifying their role.

In the late 1990s, the US Congress mandated that all Federal Departments must have an Enterprise Architecture in order to purchase new IT equipment and software. The thrust of the reasoning was that a Department should have an overall plan, which makes a good deal of sense. I suspect the term "Enterprise Architecture" was meant to denote the unification of the supporting tooling, though they could have used "Enterprise IT Engineering" in the manner of Manufacturing Engineering, which unifies the processes, procedures, functions, and methods of the assembly line. And yet Enterprise Architecture means something more, as embodied in the Federal Enterprise Architecture Framework (FEAF). The architecture team that created this framework recognized that processes, systems, and other tooling must support the organization's Vision and Mission. However, it's up to the organization and the Enterprise Architect to implement processes that can populate and use the data in the framework effectively. And that's the rub.

Functions vs Processes and Products vs Systems

In the late 1990s and early 2000s the DoD referred to armed drones as Unmanned Combat Air Vehicles (UCAVs), then in the later 2000s, they changed the name of the concept to Unmanned Combat Air Systems (UCAS). Why?

There are three reasons, having to do with changes in western culture, the most difficult changes for any organization. These are: 1) a change from linear process understanding to linear and cyclic, 2) a change from thinking about a set of functions to understanding a function as part of a process, and 3) a change in thinking from product to system.

Linear vs Cyclic Temporal Thinking

Product thinking is creating something in a temporally linear fashion; that is, creating a product has a start and an end. D. Boorstin, in the first section of his book, The Discoverers, discusses the evolution of the concept of time, from its cyclic origins through the creation of a calendar to the numbering of years, to the concept of history as a sequence of events. To paraphrase Boorstin, for millennia all human thinking and human society was ruled by the yearly and monthly cycles of nature. Gradually, likely starting with the advent of clans and villages, a vague concept of a linear series of events formed. Even so, the cycles of life remain at the core of most societies (e.g., in the East, the Hindu cycles and the Chinese year, and in the West, Christmas and New Year's, and various national holidays).

The concept of history changed cultural thinking from cycles to a progression through a series of linear temporal events (events in time that don’t repeat and that cause other events to occur). Within several centuries this concept of history permeated Western Culture, breaking and flattening the temporal cycles into a flat line of events. This concept, together with data, information, and knowledge in the form of books, meant that Western culture now had the ability to fully understand the concept of progress. Adam Smith applied this concept to manufacturing in the form of a process, which divided the work into functions (events) and which ended up producing many more products from the same inputs of raw materials, labor, and tooling.

Function vs Process

In Chapter 1 of Book 1 of An Inquiry into the Nature and Causes of the Wealth of Nations (commonly called The Wealth of Nations), Adam Smith discussed the concept of the “Division of Labour”. This is the most important chapter of his book, and the Division of Labor is its most important concept; far more important than “the invisible hand” or any of the others. It is because this concept of a process made from discrete functions is the basis for all of the manufacturing transformation of the Industrial Revolution. Prior to this, the division of labor was an immature and informal concept; after, many cottage industrialists adopted the concept or were put out of business by those that did.

Adam Smith made his case using a very simple example, the making of straight pins. In this example he demonstrated that eight men, each serving in a specialized function, could make more than 10 times the number of pins in a day compared with each of the men performing all the functions himself. He called it the division of labor; we call it “functional specialization”.

Functional specialization of skills and tooling permeates Western Culture and has led to greater wealth production than any prior concept. Consequently, as Western Civilization accreted knowledge, researchers, engineers, and skilled workers became more expert in their specialized functions and increasingly less aware of the rest of the process.

Currently, most organizations are structured by function: HR, accounting, contracts, finance, marketing or business development, and so on. In manufacturing there are designers (detailed design engineers), engineers (analysts of the design), manufacturing engineers, and other Subject Matter Experts (SMEs). Each of these functions vies with the others for funding to better optimize its particular function. And most organizations allocate funding to these functions (or sometimes groups of functions) for this type of optimization.

Unfortunately, allocating funds by function is a very poor way to allocate funds. There is a principle in Systems Engineering that “Optimizing the sub-systems sub-optimizes the system”. J.B. Quinn, in “Managing Innovation: Controlled Chaos” (Harvard Business Review, May-June 1985), demonstrated this principle, as shown in Figure 1.

Figure 1–Function vs Process Funding

As shown in Figure 1, at the bottom where you cannot really see it, for every unit of money invested in a single function, the organization will get, at best, one unit of money improvement in the total process. However, an investment that affects more than one function would yield 2(N-1)-1 units of total improvement in the process, where N is the number of functions affected. So focusing investment on the process will yield much better results than focusing on the function. This is the role of the Enterprise Architect and the organization’s process and systems engineers using the Mission Alignment process. While this point was intuitively understood in manufacturing (e.g., assembly line manufacturing engineering) for well over 150 years, and was demonstrated in 1985, somehow Functional Management is not willing to give up its investment-decision prerogative.
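The arithmetic behind this claim can be sketched in a few lines of Python. The function name is mine, and I am assuming N counts the functions an investment touches; the 2(N-1)-1 relationship itself is the one cited from Quinn:

```python
def process_improvement(functions_affected: int) -> int:
    """Units of improvement in the total process per unit invested,
    using the 2(N-1)-1 relationship cited from Quinn (1985).
    Applies when the investment spans two or more functions."""
    return 2 * (functions_affected - 1) - 1

# Investing across more functions compounds the payoff:
for n in range(2, 6):
    print(n, "functions ->", process_improvement(n), "units")
```

Under this relationship, an investment spanning five functions returns seven units of process improvement, versus at best one unit for an investment confined to a single function.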

Product vs System

Influenced by the Wealth of Nations, from about 1800 on, industries, first in Britain, then across the Western world, and finally globally, used Adam Smith’s concept of a process as an assembly line of functions to create more real value than humankind had ever produced before. But this value was in the form of products: things. Developing new “things” is a linear process. It starts with an idea, an invention, or an innovation; continues with product development through initial production and marketing; and finally, if successful, ramps up production, which continues until the product is superseded by a new one. This is the Waterfall Process Model.

The organization that manufactured the product had only the obligation to ensure that the product would meet the specifications the organization advertised at the time the customer purchased it, and in very few cases beyond the early part of the product’s life cycle. Generally, these specifications were so general, so non-specific, and so opaque that the manufacturing company could not be held responsible. In fact, a good many companies that are over 100 years old exist only because they actually supported their products and specifications. Their customers turned into their advertising agency.

This model is good for development (what some call product realization) and transformation projects, but the model has two fatal flaws, long term. The first (as I discuss in my post Systems Engineering, Product/System/Service Implementing, and Program Management) is that the waterfall process is based on the assumption that “All of the requirements have been identified up front”; a heroic assumption to say the least (and generally completely invalid). The second has equal impact and was caused by the transportation and communications systems of the 1700s to the 1950s. This flaw is that “Once the product leaves the factory it is no longer the concern of the manufacturer.”

This second flaw in historical/straight-line/waterfall thinking affects both the customer and the supplier. The customer had, and has, a hard time keeping the product maintained. For example, most automobile companies in the 1890s did not have dealerships with service departments; in fact they did not have dealerships as such. Instead, most automobiles were purchased by going to the factory or ordering by mail. And even today, most automobile manufacturers don’t fully consider the implications of disposal when designing a vehicle. They are thinking of an automobile as a product, not as a system or a system of systems (which would include the road system and the fuel production and distribution systems). The flavor of this in the United States is its disposable economic thinking, in everything from diapers to houses (yes, houses: even during the US housing slump, people have purchased houses and knocked them down to build larger, much more expensive housing, at least in some major metropolitan areas). Consequently, nothing is built to last; everything is a consumable product.

Systems Thinking and The Wheel of Progress

Since the 1960s, there has been a very slow but growing trend toward cyclic thinking within organizations. Some of this is due to the impact of the environmental movement and ecosystem models. More of this change in thinking is due to the realization that there really is a “wheel of progress”. Like a wheel on a cart, the wheel of progress goes through cycles to move forward.

The “cycle” of the “wheel of progress” is the OODA Loop Process, that is, the Observe, Orient, Decide, Act (OODA) loop. The actual development or transformation of a system occurs during the “Act” function. This can be either a straight-line, “waterfall-like” process or a short-cycle, “RAD-like” process. However, the cycle is complete only when the customer observes the transformed system in operation, orients the results of that observation to the organization’s Vision and Mission to determine whether the system is being effective and cost efficient, and then decides whether to act during the rest of the cycle. The key difference between product and systems thinking is that each “Act” function is followed by an “Observe” function. In other words, there is a feedback loop to ensure that the output from the process creates the benefits required and that any defects in the final product are caught and rectified in the next cycle before the defect causes harm. For example, Ford treated its Explorer SUV as a product rather than a system. “Suddenly”, tire blowouts on the SUV contributed to accidents, in some of which the passengers were killed. If Ford had treated the Explorer as a system rather than a product, and kept metrics on problems that the dealers found, then it might have caught the problem much earlier. Again, last year, Toyota, also treating its cars as products rather than systems, found a whole series of problems.
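As a minimal sketch of this feedback loop (the function, metric names, and numbers are all hypothetical, not drawn from any actual manufacturer's data), one turn of the wheel can be expressed as: Observe the measured results of the system in operation, Orient them against Mission-derived targets, and Decide where to Act in the next cycle:

```python
def ooda_cycle(observed, targets):
    """One turn of the wheel of progress.
    Observe: take measured feedback from the system in operation.
    Orient: compare each observation to its Mission-derived target
            (targets are treated here as ceilings, e.g. defect rates).
    Decide: return the metrics that miss their targets; these drive
            the Act function of the next transformation cycle."""
    gaps = {k: v - targets[k] for k, v in observed.items() if k in targets}
    return sorted(k for k, g in gaps.items() if g > 0)

# e.g. dealer-reported metrics vs. targets; only the missed one surfaces:
print(ooda_cycle({"defect_rate": 0.08, "warranty_claims": 3.0},
                 {"defect_rate": 0.01, "warranty_claims": 5.0}))
# → ['defect_rate']
```

The point of the sketch is the shape, not the arithmetic: every Act produces observations that feed the next cycle, so a rising defect metric is caught by the process itself rather than by accident reports.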

OODA Loop Velocity

USAF Col. John Boyd, creator of the OODA Loop, felt that the key to success in both aerial duels and on the battlefield is cycling through the OODA Loop faster than your opponent. Others have found that this works with businesses and other organizations as well. This is the seminal reason to go to short-cycle development and transformation. Short cycle in this case would be 1 to 3 months, rather than the “yearly planning cycle” of most organizations. Consequently, all observing, orienting, and deciding should be good enough, not developed for the optimal; there isn’t one. [This follows the military axiom that Grant, Lee, Jackson, and even Patton followed: “Doing something now is always better than doing the right thing later”.] Expect change, because not all of the requirements are known, and even if they are, the technological and organizational (business) environment will change within one to three months. But remember that the organization’s Mission, and especially its Vision, change little over time; therefore the performance metrics, the metrics that measure how optimal the current systems and proposed changes are, will change little. These metrics are the guides in this environment of continuous change. Plan and implement for upgrade and change, not stability; this is the essence of an agile system.

This is true of hardware systems as well as software. For example, in 1954, Haworth Office Furniture started building movable wall partitions to create offices. Steelcase and Herman Miller followed suit in the early 1960s. At that point, businesses and other organizations could lease all or part of a floor of an office building, and as their needs changed these partitions could be reconfigured. This made for agile office space, or office systems (and the bane of most office workers, the cubicle), but it allows the organization to make the most effective and cost efficient use of the space it has available.

The Role of the Systems Engineering Disciplines

There are significant consequences for the structure of an organization that is attempting to be highly responsive to the challenges and opportunities presented to it while pursuing its Mission and Vision in a continuously changing operational and technical environment. It has to operate and transform itself in an environment that is much more like basketball (continuous play) than American football (discrete plays from the scrimmage line, with its downs); apologies to any international readers for this analogy. This requires continuous cyclic transformation (system transformation) as opposed to straight-line transformation (product development).

Treating Process in Product Thinking Terms

Starting in the 1980s, after the publication of Quality is Free by Phil Crosby in 1979, the quality movement and quality circles, and later the concept of Integrated Product Teams (IPTs, which some changed to Integrated Product and Process Teams, IPPTs), have been attempts to move organizations from a focus on product thinking toward a focus on system thinking. Part of this was in response to the Japanese lean process methods, stemming in part from the work of W. Edwards Deming and others. The first international attempt is ISO 9000 (starting in 2002). It remains quality Product Thinking, though in transition to Systems Thinking, since it is a one-time, straight-through (Six Sigma) methodology, starting with identifying a process or functional problem and ending with a change in the process, function, or supporting system.

Other attempts at systems thinking were an outgrowth of this emphasis on producing quality products (product thinking). For example, the Balanced Scorecard (BSC) approach, conceptualized in 1987, attempted to look at all dimensions of an organization by measuring more than one of them. It uses four dimensions to measure the performance of an organization and its management, instead of measuring performance on the financial dimension alone. The Software Engineering Institute (SEI) built level four, measurement, into the Capability Maturity Model for the same purpose.

In 1990, Michael Hammer began to create the discipline of Business Process Reengineering (BPR), followed by others like Tom Peters and Peter Drucker. This discipline treats the process as a process rather than as a series of functions. It is more like the Manufacturing Engineering discipline, which seeks to optimize processes with respect to cost efficiency per unit produced. For example, Michael Hammer would say that no matter the size of an organization, its books can be closed at the end of each day, rather than by spending two weeks at the end of the business or fiscal year “closing the books”. As another example, you can tell whether an organization is focused on functions or processes by its budgeting model: either a process budgeting model or a functional budgeting model.

Like the Lean concept, and to some degree ISO 9000, ITIL, and other standards, BPR does little to link to the organization’s Vision and Mission, as Jim Collins discusses in Built to Last (2002); or, as he puts it, BHAGs, Big Hairy Audacious Goals. Instead, it focuses on cost efficiency (cost reduction through reducing both waste and organizational friction, one type of waste) within the business processes.

System Architecture Thinking and the Enterprise Architect

In 1999, work started on the Federal Enterprise Architecture Framework (FEAF) with a very traditional four-layer architecture: business process, application, data, and technology. In 2001, a new version was released that included a fifth layer, the Performance Reference Model. For the first time the FEAF links all of the organization’s processes and enabling and supporting technology to its Vision and Mission. Further, if properly implemented, it can do this in a measurable manner (see my post Transformation Benefits Measurement, the Political and Technical Hard Part of Mission Alignment and Enterprise Architecture). This enables the Enterprise Architect to perform in the role that I have discussed in several of my posts and in comments in some of the groups on the LinkedIn site: decision support for investment decision-making processes and support for the governance and policy management processes (additionally, I see the Enterprise Architect as responsible for the Technology Change Management process, for reasons that I discuss in Technology Change Management: An Activity of the Enterprise Architect). Further, successful organizations will use a short-cycle investment decision-making (Mission Alignment) and implementing (Mission Implementation) process, for the reasons discussed above. [Sidebar: there may be a limited number of successful projects that need multiple years to complete. For example, large buildings, new aircraft airframe designs, and large ships are all very large construction efforts, while others, like construction or reconstruction of highways, can be short-cycle efforts, much to the joy of the motoring public.] The Enterprise Architect (EA), using the OODA Loop pattern, has continuous measured feedback as the change operates.
Granted, there will be a learning curve for all changes in operation; still, the Enterprise Architect is in the best position to provide guidance as to what worked and what other changes are needed to further optimize the organization’s processes and tooling to support its Mission and Vision. Additionally, because the EA is accountable for the Enterprise Architecture, he or she has the perspective of the entire organization’s processes and tooling, rather than just a portion, and is in the position to make recommendations on investments and governance.
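To make the five-layer idea concrete, here is a toy illustration, assuming the layers chain in the order listed above; the dictionary and function are hypothetical sketches of the linkage, not part of the FEAF itself:

```python
# Hypothetical model of the FEAF's five reference layers, with the
# Performance layer on top linking everything back to Vision and Mission.
feaf = {
    "performance":      {"links_to": ["vision", "mission"]},
    "business_process": {"supports": "performance"},
    "application":      {"supports": "business_process"},
    "data":             {"supports": "application"},
    "technology":       {"supports": "data"},
}

def trace_to_mission(layer: str) -> list:
    """Walk upward from any layer to the performance layer, i.e. the
    measurable link back to the organization's Vision and Mission."""
    chain = [layer]
    while "supports" in feaf[chain[-1]]:
        chain.append(feaf[chain[-1]]["supports"])
    return chain

print(trace_to_mission("technology"))
# → ['technology', 'data', 'application', 'business_process', 'performance']
```

The point is that under the fifth layer, no piece of technology is justified in isolation; every investment can be traced up through the stack to a measurable contribution to the Mission.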

System Architecture Thinking and the Systems Engineer and System Architect

One consequence of short-cycle processes is that all short-cycle efforts are “level of effort” based. Level of Effort means that a development or transformation effort is executed using a set level of resources over the entire period of the effort. Whereas in a waterfall-like “Big Bang” process scheduling the resources to support the effort is a key responsibility of the effort (and the PM), with the short cycle the work must fit into the cycles. With the waterfall, the PM could schedule all of the work by adding resources or lengthening the time required to design, develop, implement, and verify; now the work must fit into a given time and level of resources. The PM can do neither, because both are held constant. If, in order to make the process agile, we use the axiom that “Not all of the requirements are known at the start of the effort”, then any scheduling of work beyond the current cycle is an exercise in futility, because as the number of known requirements increases, some of the previously unknown requirements will be of higher priority for the customer than any of the known requirements. Since a Mission of a supplier is to satisfy the needs of the customer, each cycle will work on the highest-priority requirements, which means that some or many of the known requirements will be “below the line” on each cycle. The final consequence is that some of the originally known requirements will not be met by the final product. Instead, the customer will get its highest-priority requirements fulfilled. I have found that when this is the case, the customer is more delighted with the product, takes greater ownership of it, and finds resources to continue with the lower-priority requirements.
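This “highest priority first, fixed capacity” selection can be sketched in a few lines; the backlog items, costs, and capacity are invented for illustration:

```python
import heapq

def plan_cycle(backlog, capacity):
    """Select requirements for one fixed-length, level-of-effort cycle.
    backlog items are (priority, cost, name) tuples; a lower priority
    number is more urgent. Newly discovered requirements compete on
    equal terms with those known up front."""
    heapq.heapify(backlog)  # order by priority
    selected, used = [], 0
    # Take the most urgent items that still fit in the cycle's capacity.
    while backlog and used + backlog[0][1] <= capacity:
        priority, cost, name = heapq.heappop(backlog)
        selected.append(name)
        used += cost
    return selected

backlog = [(2, 3, "known req A"),
           (1, 2, "newly discovered req"),
           (3, 4, "known req B")]
print(plan_cycle(backlog, capacity=5))
# → ['newly discovered req', 'known req A']
```

Note that “known req B” falls below the line for this cycle: a requirement known from the start is deferred because a later-discovered one mattered more to the customer, which is exactly the behavior described above.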

On the other hand, not fulfilling all of the initially known requirements (some of which were not real requirements, and some of which contradicted other requirements) gives PMs, the contracts department, accountants, lawyers, and other finance engineers the pip! Culturally, they are generally incapable of dealing in this manner; their functions are not built to handle it when the process is introduced. Fundamentally, the assumption that “Not all the requirements are known up front” makes the short-cycle development process Systems Requirements-based instead of Programmatic Requirements-based. This is the major stumbling block to the introduction of this type of process, because it emphasizes the roles of the Systems Engineer and System Architect and de-emphasizes the role of the PM.

The customer, too, must become accustomed to the concept, though in my experience on many efforts, once the customer understands his or her role in this process, the customer becomes delighted. I had one very high-level customer who said, after the second iteration through one project, “I would never do any IT effort again that does not use this process.”