With such background discussions in mind, this thread focuses on how our socio-economic system may evolve to adapt to the Anthropocene (including, but not limited to, climate change) through about 2100. In this regard I plan to discuss: (1) representative background on how AI and cyborg technology may interact with the modern human mind-body dynamic in new ways; (2) representative examples of anthropological findings illustrating how evolution has shaped the mind-body dynamic in modern humans; and (3) representative background on how morality and "mindfulness" techniques are already impacting modern business practices, and how information theory, psychology, sociology, and the neuroscience of well-being can foster a thriving, resilient, and compassionate global socio-economic system.

As I believe that the Anthropocene is best characterized by the Information Age that our socio-economic system has been moving into, I will begin this thread with an initial linked article about how the future rich (in the next century or two) may use AI & cyborg technology to assume behaviors that exceed current social expectations:

http://rt.com/uk/263133-rich-human-god-cyborgs/

Extract: "Rich people living 200 years from now are likely to become “god-like” immortal cyborgs, while the poor will die out, an historian has claimed. Yuval Noah Harari, a professor at the Hebrew University of Jerusalem, said the merger of humans and machines would be the greatest evolution since the appearance of life. He added that the greatest minds in computer engineering already believe death is a mere technological problem with a solution. Harari said advances in technology will enable humans to become god-like creatures, as different from today’s humans as chimpanzees are from us. … During his talk, Harari said humans are programmed to be dissatisfied in life. Even when humans gain pleasure and achievements, they want more. “I think it is likely in the next 200 years or so Homo sapiens will upgrade themselves into some idea of a divine being, either through biological manipulation or genetic engineering or by the creation of cyborgs, part organic part non-organic,” he told the audience. “It will be the greatest evolution in biology since the appearance of life. Nothing really has changed in four billion years biologically speaking. But we will be as different from today’s humans as chimps are now from us.” However, Harari said only the wealthiest would benefit from ‘cyborg’ technology, making the gap between rich and poor in society even wider. In the future the rich may be immortal while the poor would die out, he said. Harari’s book argues that humans are successful as a species because of their imagination and ability to create fictions. He cites religion, money and the concept of human rights as “fictions” which hold society together but have no basis in nature. Speaking at Hay, he said: “God is extremely important because without religious myth you can’t create society. Religion is the most important invention of humans. As long as humans believed they relied more and more on these gods they were controllable.
“But what we see in the last few centuries is humans becoming more powerful and they no longer need the crutches of the gods. Now we are saying we do not need God, just technology. “The most interesting place in the world from a religious perspective is not the Middle East, it’s Silicon Valley where they are developing a techno-religion. They believe even death is just a technological problem to be solved,” he added. "

Starting this thread with such a linked article may seem like an artificial "deus ex machina" device; however, I sincerely believe that developments in information theory will have a major impact on our coming socio-economic system for good (discussed in this thread) and for bad (see the Anthropogenic Existential Risk thread here: http://forum.arctic-sea-ice.net/index.php/topic,1307.0.html ).

https://en.wikipedia.org/wiki/Deus_ex_machina

Extract: "Deus ex machina: meaning "god from the machine". The term has evolved to mean a plot device whereby a seemingly unsolvable problem is suddenly and abruptly resolved by the contrived and unexpected intervention of some new event, character, ability or object."


“It is not the strongest or the most intelligent who will survive but those who can best manage change.” ― Leon C. Megginson

"A central aspect of consciousness is the ability to look ahead, the capability we call "foresight." It is the ability to plan, and in social terms to outline a scenario of what is likely going to happen, or what might happen, in social interactions that have not yet taken place . . . It is a system whereby we improve our chances of doing those things that will represent our own best interests . . . I suggest that "free will" is our apparent ability to choose and act upon whichever of those seem most useful or appropriate, and our insistence upon the idea that such choices are our own." Quote by Richard D. Alexander

In my opinion, Siddhartha Gautama (Sanskrit; in Pali: Siddhattha Gotama) taught that "free will" allows individuals to choose from the cacophony of modelled options that the mind creates; in un-enlightened individuals this generates the illusion that humans are serving their own best interests by feeding back into this mental construct. Thus, in my opinion, the first noble truth is that feeding self-interest back into this mental construct creates suffering when the feedback causes the mental construct to diverge from reality; with proper insight, however, "free will" can allow an individual to choose, from the cacophony of modelled options, actions that will improve the reality of the "common good".

In order to cope with the complexity and chaos of reality, the human mind evolved a high degree of redundancy within a pattern-recognizing hierarchical mental structure that uses inductive reasoning, looking both forward and backward through the hierarchy, to produce relatively linear projections that are frequently accepted by the mind as deductive proof of the ego's power & foresight. In this sense the mind does not hesitate to confabulate any a posteriori account that makes sense to our a priori mental construct, thus creating an illusion of self-interest that is "seeming parted, but a union in partition" (in this same sense, Gautama's teaching that there is no "self" can be taken to indicate that the illusion of self is really a mindless aggregation of the ever-changing cacophony of mental model options, in the same way that an orchestra is singular).

Inductive reasoning, also known as induction, is a kind of reasoning that constructs or evaluates propositions (a priori) that are abstractions of observations of individual instances. Inductive reasoning contrasts with deductive reasoning in that a general conclusion is arrived at from specific examples. Inductive reasoning allows for the possibility that the conclusion may be false, even where all of the premises are true.

Arthur Schopenhauer said: "… everyone believes himself a priori to be perfectly free, even in his individual actions, and thinks that at every moment he can commence another manner of life . . . But a posteriori, through experience, he finds to his astonishment that he is not free, but subjected to necessity, that in spite of all his resolutions and reflections he does not change his conduct, and that from the beginning of his life to the end of it, he must carry out the very character which he himself condemns."

As I have previously posted: C. D. Broad is often misquoted as saying: "induction is the glory of science and the scandal of philosophy"; however, the actual quote was: "May we venture to hope that when Bacon's next centenary is celebrated the great work which he set going will be completed; and that Inductive Reasoning, which has long been the glory of Science, will have ceased to be the scandal of Philosophy? " Broad, C.D. (1926), "The philosophy of Francis Bacon: An address delivered at Cambridge on the occasion of the Bacon tercentenary, 5 October, 1926", Cambridge: University Press, p. 67.

In this sense modern decision makers in the international socioeconomic system choose from a cacophony of information presented by the market, society, advisors and one's own mental constructs to "… act upon whichever of those seem most useful or appropriate, and our insistence upon the idea that such choices are our own" (partial quote per Richard D. Alexander).

Traditionally, top modern decision makers make their socio-economic choices "on/at the margin," where uncertainty is maximized; this generally results in a near balance between "Cad-like" and "Dad-like" behavior (i.e., "… the scandal of Philosophy") that oscillates with ever-changing circumstances, reminiscent of a Tao Yin-Yang symbol of undifferentiated unity. In a sustainable socio-economic system, individuals consider the greater good of the system. Such a system existed prior to the beginning of the Holocene (which I propose be re-named the Anthropocene; see the "Early Anthropocene" thread); note that the megafauna extinction that I believe triggered the beginning of the Anthropocene/Holocene added pressure on mankind to enter the Agricultural Age, which eventually (by between 5000 and 3000 years before present) allowed for sufficient wealth accumulation to facilitate organized exploitation of other people, and of nature, in a non-sustainable fashion. Today, however, Homo Economicus is in a crisis of scandalous "Cad-like" self-serving (whether with regard to externalizing GHG, pollution, effort/work, responsibility, risk, etc.) that requires our current socio-economic system either to adapt to the currently changing conditions or potentially to cease to exist (à la existential risk).

In my next post I plan to consider our increasingly Homo Economicus-based socio-economic system as a kind of fragile Humpty Dumpty sitting on a wall (which is also a scientific analogy for information entropy), and I will further consider how Information Theory (including Artificial Intelligence) can be used to minimize the fragility (and to decrease the entropy) of our current socio-economic system by building on the already highly evolved information-processing structure of the human mind.


"Humpty Dumpty sat on a wall,
Humpty Dumpty had a great fall.
All the king's horses and all the king's men
Couldn't put Humpty together again."

For perspective, I compare our modern global socio-economic system to Humpty Dumpty, as our complex anthropogenic system has become very fragile during the Anthropocene Epoch; furthermore, Humpty Dumpty can be taken as an analogy for information entropy (see the following Wikipedia link).

https://en.wikipedia.org/wiki/Humpty_Dumpty

Extract: "Humpty Dumpty has been used to demonstrate the second law of thermodynamics. The law describes a process known as entropy, a measure of the number of specific ways in which a system may be arranged, often taken to be a measure of "disorder". The higher the entropy, the higher the disorder. After his fall, and subsequent shattering, the inability to put him together again is representative of this principle, as it would be highly unlikely, though not impossible, to return him to his earlier state of lower entropy, as the entropy of an isolated system never decreases."

As an example of how information entropy can be increased within a system that defines terms in self-serving ways, consider both the following quote by Lewis Carroll and the following linked Wikipedia extracts about entropy, and entropic uncertainty, in Information Theory:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean – neither more nor less.”
—Lewis Carroll, Through the Looking Glass

https://en.wikipedia.org/wiki/Entropy#Information_theory

Extract: "I thought of calling it "information", but the word was overly used, so I decided to call it "uncertainty". [...] Von Neumann told me, "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage."
(Conversation between Claude Shannon and John von Neumann regarding what name to give to the attenuation in phone-line signals)

When viewed in terms of information theory, the entropy state function is simply the amount of information (in the Shannon sense) that would be needed to specify the full microstate of the system. This is left unspecified by the macroscopic description.

In information theory, entropy is the measure of the amount of information that is missing before reception and is sometimes referred to as Shannon entropy. Shannon entropy is a broad and general concept which finds applications in information theory as well as thermodynamics. It was originally devised by Claude Shannon in 1948 to study the amount of information in a transmitted message. The definition of the information entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities pi so that ..."

H = −Σᵢ pᵢ log(pᵢ)
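To make the definition above concrete, here is a minimal sketch (my own plain-Python illustration, not taken from the quoted sources) that evaluates H = −Σᵢ pᵢ log(pᵢ) for a few simple distributions:

```python
import math

def shannon_entropy(probs, base=2):
    """Shannon entropy H = -sum(p_i * log(p_i)) of a discrete distribution."""
    # Terms with p = 0 contribute nothing (the limit p*log(p) -> 0).
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin carries exactly 1 bit of uncertainty per toss.
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit
# A biased coin is more predictable, so its entropy is lower.
print(shannon_entropy([0.9, 0.1]))   # ~0.469 bits
# A certain outcome carries no information at all.
print(shannon_entropy([1.0]))        # 0.0 bits
```

As the examples suggest, entropy is maximized when all outcomes are equally likely, which is why reducing uncertainty (lowering entropy) is equivalent to making a system more predictable.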

https://en.wikipedia.org/wiki/Entropic_uncertainty

Extract: "In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is stronger than the usual statement of the uncertainty principle in terms of the product of standard deviations."
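As a numerical illustration of this idea (my own sketch, not from the linked article): a discrete analogue of the Hirschman inequality, the Maassen–Uffink bound, says that for any normalized vector the Shannon entropies of its time-domain and frequency-domain probability distributions sum to at least log₂(N). The following sketch checks this with NumPy's FFT:

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (in bits) of a discrete probability vector."""
    p = p[p > 1e-15]  # drop zero-probability terms
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
N = 64
# Random normalized complex "signal" (state vector).
x = rng.normal(size=N) + 1j * rng.normal(size=N)
x /= np.linalg.norm(x)
# Unitary DFT (dividing by sqrt(N)) gives the spectral amplitudes.
X = np.fft.fft(x) / np.sqrt(N)

H_time = entropy_bits(np.abs(x) ** 2)
H_freq = entropy_bits(np.abs(X) ** 2)
# Maassen-Uffink bound for the DFT: H_time + H_freq >= log2(N).
print(H_time + H_freq, ">=", np.log2(N))
```

A delta function (perfectly localized in time) saturates the bound: its temporal entropy is 0 while its spectrum is flat with entropy log₂(N), mirroring the usual position/momentum trade-off.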

https://en.wikipedia.org/wiki/Entropy_(information_theory)

Extract: "In information theory, entropy (more specifically, Shannon entropy) is the expected value (average) of the information contained in each message received. Here, message stands for an event, sample or character drawn from a distribution or data stream. Entropy thus characterizes our uncertainty about our source of information, and increases for more sources of greater randomness. The source is also characterized by the probability distribution of the samples drawn from it. The idea here is that the less likely an event is, the more information it provides when it occurs. For some other reasons (explained below) it makes sense to define information as the negative of the logarithm of the probability distribution. The probability distribution of the events, coupled with the information amount of every event, forms a random variable whose average (also termed expected value) is the average amount of information, a.k.a. entropy, generated by this distribution. Units of entropy are the shannon, nat, or hartley, depending on the base of the logarithm used to define it, though the shannon is commonly referred to as a bit.

…

A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the last characters), the binary entropy is:

H(S) = −Σᵢ pᵢ log₂(pᵢ)

where pi is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:

H(S) = −Σᵢ pᵢ Σⱼ pᵢ(j) log₂(pᵢ(j))

where i is a state (certain preceding characters) and pᵢ(j) is the probability of j given i as the previous character.

For a second-order Markov source, the entropy rate is:

H(S) = −Σᵢ pᵢ Σⱼ pᵢ(j) Σₖ pᵢ,ⱼ(k) log₂(pᵢ,ⱼ(k))
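The order-0 and first-order formulas above can be estimated directly from a sample of text by counting characters and bigrams. The sketch below is my own illustration of the quoted formulas, not code from the Wikipedia article:

```python
from collections import Counter
from math import log2

def order0_entropy(text):
    """Order-0 estimate: H = -sum p_i log2(p_i) over character frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def order1_entropy(text):
    """First-order Markov entropy rate, estimated from bigram counts:
    H = -sum_i p_i sum_j p_i(j) log2(p_i(j))."""
    pairs = Counter(zip(text, text[1:]))
    ctx = Counter(text[:-1])       # occurrences of each context character i
    n = len(text) - 1
    h = 0.0
    for (i, j), c in pairs.items():
        p_ij = c / n               # joint probability p_i * p_i(j)
        p_j_given_i = c / ctx[i]   # conditional probability p_i(j)
        h -= p_ij * log2(p_j_given_i)
    return h

sample = "the quick brown fox jumps over the lazy dog " * 50
print(order0_entropy(sample))  # per-character entropy, ignoring context
print(order1_entropy(sample))  # lower: the previous character narrows the choice
```

The first-order estimate is always at or below the order-0 estimate, because conditioning on the previous character can only reduce uncertainty; this is exactly the sense in which redundancy in language lowers its entropy rate.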

Just as Information Theory has allowed us to adequately control information entropy to facilitate signal transmission through the Cloud & Internet to our smart phones, so developments in AI can help to limit entropic uncertainty (note that all complex socio-economic systems are founded on reliability, i.e. a reduction in uncertainty), allowing our current fragile socio-economic system to adapt into a more sustainable form (regarding the risks of AI, see the discussion in the "Anthropogenic Existential Risk" thread). To demonstrate that AI is not just some "deus ex machina" plot device, I present the following linked articles (& associated extracts & four plots from Ray Kurzweil) that illustrate the rapid pace of AI research in the "high tech" industry:

Extract: "France has many hidden gems, and Facebook is well-aware of that. The company is building a new artificial intelligence research team in Paris in order to work on ambitious futuristic projects. The new team will work closely with existing Facebook AI Research (FAIR) teams in Menlo Park and New York on image recognition, natural language processing, speech recognition, machine learning, live translating tools and more."

http://www.washingtonpost.com/opinions/five-myths-about-google/2015/03/20/a22bb30e-ce61-11e4-8a46-b1dc9be5a8ff_story.html

Extract: "Google has built a research group around AI and machine learning, and it even hired renowned AI guru Ray Kurzweil, who believes that by 2045 humans will merge with computers in what’s known as “the Singularity.” Google’s recent acquisitions speak to its intentions: British company DeepMind, one of the most advanced AI development shops in the world, plus eight of the world’s best robotics companies. Nobody knows what Google will do with all these robots and AI software, but its ambitions certainly go well beyond self-driving cars.

This work takes place inside Google X, the company’s top-secret research lab. A few hundred people work there, a tiny but potent slice of Google’s workforce of 53,600. Google isn’t alone in the quest to develop AI (Facebook also has an AI research team), but it’s one of the few organizations with the brainpower and financial resources to make true artificial intelligence a reality. Plus, AI is in its blood: Google co-founder and chief executive Larry Page is the son of renowned AI pioneer Carl Page, and he’s personally funding a research project to reverse-engineer the brain of a worm. “Every time I talk about Google’s future with Larry Page, he argues that it will become an artificial-intelligence” company, tech venture capitalist Steve Jurvetson has said."

http://singularityhub.com/2015/01/26/ray-kurzweils-mind-boggling-predictions-for-the-next-25-years/

Extract: "Ray’s predictions for the next 25 years

The above represent only a few of the predictions Ray has made. While he hasn’t been precisely right, to the exact year, his track record is stunningly good. Here are some of my favorite of Ray’s predictions for the next 25+ years. If you are an entrepreneur, you need to be thinking about these. Specifically, how are you going to capitalize on them when they happen? How will they affect your business?

By the late 2010s, glasses will beam images directly onto the retina. Ten terabytes of computing power (roughly the same as the human brain) will cost about $1,000.

By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.

By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.

By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world at a whim.

By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

I want to make an important point. It’s not about the predictions. It’s about what the predictions represent. Ray’s predictions are a byproduct of his (and my) understanding of the power of Moore’s Law, more specifically Ray’s “Law of Accelerating Returns” and of exponential technologies. These technologies follow an exponential growth curve based on the principle that the computing power that enables them doubles every two years."

The linked article indicates that Elon Musk believes (as a personal friend of Larry Page & Sergey Brin) that Google is making great strides on implementing AI per Ray Kurzweil's estimated schedule.

The following Journal of Machine Learning Research paper presents an example of counterfactual reasoning and learning systems used by Microsoft's BING search engine "… to leverage causal inference to better understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system."

Abstract: "This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system. Such predictions allow both humans and algorithms to select the changes that would have improved the system performance. This work is illustrated by experiments on the ad placement system associated with the Bing search engine."
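For readers curious what such counterfactual reasoning looks like in practice, one standard estimator in this family is inverse-propensity scoring (importance weighting): reweighting logged outcomes by the ratio of new-policy to logging-policy probabilities. The sketch below is a toy illustration with entirely synthetic logged data (the policies, probabilities and rewards are hypothetical, not from the paper):

```python
import random

random.seed(1)

# Hypothetical logged data: the old policy chose action A with prob 0.8, B with 0.2,
# and we recorded the action, its logging probability, and the observed reward.
def logging_step():
    action = "A" if random.random() < 0.8 else "B"
    prob = 0.8 if action == "A" else 0.2
    reward = random.gauss(1.0 if action == "A" else 2.0, 0.1)
    return action, prob, reward

log = [logging_step() for _ in range(20000)]

def ips_estimate(new_probs, log):
    """Inverse-propensity estimate of the reward the *new* policy would earn,
    computed purely from data logged under the old policy."""
    return sum(new_probs[a] / p * r for a, p, r in log) / len(log)

# Counterfactual question: what reward would a policy favouring B have earned?
# In expectation this is 0.2*1.0 + 0.8*2.0 = 1.8, without ever deploying the policy.
print(ips_estimate({"A": 0.2, "B": 0.8}, log))
```

The point, as in the paper, is that causal structure (knowing the logging probabilities) lets us predict the consequences of a change to the system before making it.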

As a follow-on to my last post, I provide the following additional links, extracts and two more images as examples of how fast deep learning is transforming AI into a practical tool for the "high tech" industry:

Extract: "Since the turn of the 21st century, nanotechnology, biotechnology, information technology, and cognitive sciences (NBIC) are converging and changing the elementary building blocks of matter and machines, our bodies and brains, and even our societies and environment. As John F. Kennedy said, it seems that we stand on the edge of a New Frontier. As our understanding and control over these elements dramatically increases in the coming decades, our governments will be confronted with an array of crucial ethical questions and policy choices.

Our world is undergoing a technological explosion. According to world-famous futurologists such as Ray Kurzweil, Hans Moravec and Nick Bostrom, the linear evolution rhythm is tipped over into an exponential rhythm, ushering in a new chapter in the history of mankind and intelligent life.

The NBIC convergence has already shown its serious consequences: John McAfee and Eric Brynjolfsson of MIT have demonstrated how increasing automation across factory shop floors resulted in the impoverishment of less-qualified workers. Tyler Cowen of George Mason University similarly argues that, unlike earlier spells of technological disruptions, recent developments in artificial intelligence and robotics may render large swaths of the population permanently unemployable."

As a follow-on to my last post, I provide the following links to high-tech industry AI development, and a Google image created by their deep neural net system from white noise as an example of AI dreaming.

http://www.digitaltrends.com/cool-tech/get-your-head-into-the-cloud-humans-will-be-artifically-intelligent-by-2030-ray-kurzweil-predicts/

Extract: "In fact, according to Kurzweil, humans will be artificially intelligent by 2030, making us half-homo sapien, half-computer. On June 3 at the Exponential Finance conference, Kurzweil predicted, “Our thinking then will be a hybrid of biological and non-biological thinking. We’re going to gradually merge and enhance ourselves. In my view, that’s the nature of being human – we transcend our limitations.”

In just 15 years, Kurzweil believes, the human brain will become a hybrid of biology and technology, and we will “put gateways to the cloud in our brains.” And as the cloud becomes more and more advanced and is able to store increasing amounts of information, so too will our brains. By the late 2030’s or early 2040’s, Kurzweil said, the majority of brain function, at least in terms of information processing and thought processes, will be non-biological."

http://www.businessinsider.com/ray-kurzweil-thinks-well-all-be-cyborgs-by-2030-2015-6

Extract: "Kurzweil predicted that, in just 15 years, we could choose to become part-human, part-computer. With the help of tiny nanobots made of DNA, he says our mind will be able to connect to the cloud.

DNA nanobots (yes, such things exist) are less likely to be rejected by the body's immune system than traditional hardware, since they're made of biological molecules. Researchers have already used them to target and destroy cancer cells as well as store data.

But Kurzweil thinks that this technology could eventually send emails and videos directly to the brain, or even allow us to back up our thoughts and memories."

http://www.iol.co.za/business/international/musk-funds-research-to-keep-smart-machines-from-killing-us-1.1879199#.VZUCO6Pn9Ms

Extract: "The Musk-backed Future of Life Institute said yesterday it had awarded about $7 million (R85.4m) to teams around the world to look at the risks and opportunities posed by the development of AI. The grants were funded by part of Musk’s $10m donation to the group in January and $1.2m from the Open Philanthropy Project. “There is this race going on between the growing power of the technology and the growing wisdom with which we manage it,” said Max Tegmark, the president of the Boston-based institute. “So far all the investments have been about making the systems more intelligent; this is the first time there’s been an investment in the other.”"

In subsequent posts I plan to discuss how facial micro-expressions have evolved to serve as a kind of mind-body guarantor of an individual's mental intentions, allowing for reliable cooperative socio-economic interactions. With that in mind, I provide the first linked reference below about how computers can use facial expressions within AI contexts for pattern recognition that could help adapt our current socio-economic system into a more sustainable cooperative system (with reduced entropic uncertainty), and a second linked reference about Paul Ekman's work on the Facial Action Coding System:

Abstract: "Automatic facial expression analysis aims to analyse human facial expressions and classify them into discrete categories. Methods based on existing work are reliant on extracting information from video sequences and employ either some form of subjective thresholding of dynamic information or attempt to identify the particular individual frames in which the expected behaviour occurs. These methods are inefficient as they require either additional subjective information, tedious manual work or fail to take advantage of the information contained in the dynamic signature from facial movements for the task of expression recognition.

In this paper, a novel framework is proposed for automatic facial expression analysis which extracts salient information from video sequences but does not rely on any subjective preprocessing or additional user-supplied information to select frames with peak expressions. The experimental framework demonstrates that the proposed method outperforms static expression recognition systems in terms of recognition rate. The approach does not rely on action units (AUs), and therefore eliminates errors which are otherwise propagated to the final result due to incorrect initial identification of AUs. The proposed framework explores a parametric space of over 300 dimensions and is tested with six state-of-the-art machine learning techniques. Such robust and extensive experimentation provides an important foundation for the assessment of the performance for future work. A further contribution of the paper is offered in the form of a user study. This was conducted in order to investigate the correlation between human cognitive systems and the proposed framework for the understanding of human emotion classification and the reliability of public databases."

Extract: "Like every company in this field, Affectiva relies on the work of Paul Ekman, a research psychologist who, beginning in the sixties, built a convincing body of evidence that there are at least six universal human emotions, expressed by everyone’s face identically, regardless of gender, age, or cultural upbringing. Ekman worked to decode these expressions, breaking them down into combinations of forty-six individual movements, called “action units.” From this work, he compiled the Facial Action Coding System, or FACS—a five-hundred-page taxonomy of facial movements. It has been in use for decades by academics and professionals, from computer animators to police officers interested in the subtleties of deception.

Ekman has had critics, among them social scientists who argue that context plays a far greater role in reading emotions than his theory allows. But context-blind computers appear to support his conclusions. By scanning facial action units, computers can now outperform most people in distinguishing social smiles from those triggered by spontaneous joy, and in differentiating between faked pain and genuine pain. They can determine if a patient is depressed. Operating with unflagging attention, they can register expressions so fleeting that they are unknown even to the person making them. Marian Bartlett, a researcher at the University of California, San Diego, and the lead scientist at Emotient, once ran footage of her family watching TV through her software. During a moment of slapstick violence, her daughter, for a single frame, exhibited ferocious anger, which faded into surprise, then laughter. Her daughter was unaware of the moment of displeasure—but the computer had noticed.

Recently, in a peer-reviewed study, Bartlett’s colleagues demonstrated that computers scanning for “micro-expressions” could predict when people would turn down a financial offer: a flash of disgust indicated that the offer was considered unfair, and a flash of anger prefigured the rejection."
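As a toy illustration of how FACS-style action-unit intensities could feed an expression classifier, consider the sketch below. The action-unit vectors, labels, and nearest-centroid approach are entirely hypothetical simplifications of mine; real systems such as Emotient's are far more sophisticated:

```python
import math

# Hypothetical action-unit intensity vectors (AU6 cheek raiser, AU12 lip corner
# puller, AU4 brow lowerer) for a few labelled training expressions.
train = {
    "joy":   [(0.9, 0.8, 0.0), (0.8, 0.9, 0.1)],
    "anger": [(0.1, 0.0, 0.9), (0.0, 0.1, 0.8)],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(vectors)
    return tuple(sum(v[k] for v in vectors) / n for k in range(len(vectors[0])))

centroids = {label: centroid(vs) for label, vs in train.items()}

def classify(au_vector):
    """Assign the expression whose centroid in AU-space is nearest (Euclidean)."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], au_vector))

print(classify((0.85, 0.7, 0.05)))  # "joy"
print(classify((0.05, 0.1, 0.95)))  # "anger"
```

The design point is that once expressions are decomposed into a fixed numeric feature space (action units), ordinary pattern-recognition machinery applies; the same reduction underlies the micro-expression detection described in the extract above.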


In order to transition from my discussion of Information Theory, AI (including machine consciousness), and the high tech industry, I note that all of these developments characterize the Information Age, which is the latest of a series of "Technological Eras" spanning back to the Neolithic Revolution at the beginning of the Holocene/Anthropocene (see the following Wikipedia link).

https://en.wikipedia.org/wiki/Information_Age

And among these various Technological Eras, I now draw attention to Andrew Sherratt's model of a proposed "Secondary Products Revolution" (circa the 4th-3rd millennia BC), which allowed mankind to accumulate sufficient secondary products, such as milk, cheese, wool, & transportation, that in turn allowed mankind's population to expand sufficiently to allow for the development of a market-based economy (see the following Wikipedia link, extract & the first attached image):

https://en.wikipedia.org/wiki/Secondary_products_revolution

Extract: "Andrew Sherratt's model of a secondary products revolution involved a widespread and broadly contemporaneous set of innovations in Old World farming. The use of domestic animals for primary carcass products (meat) was broadened from the 4th-3rd millennia BC to include exploitation for renewable 'secondary' products: milk, wool, traction, riding and pack transport. The SPR model incorporates two key elements:

1. the discovery and diffusion of secondary products innovations
2. their systematic application, leading to a transformation of European economy and society

Many of these innovations first appeared in the Near East during the fourth millennium BC and spread to Europe and the rest of Asia soon afterwards. They appeared in Europe by the beginning of the third millennium BC. These innovations became available in Europe due to the westwards diffusion of new species (horse, donkey), breeds (e.g. woolly sheep), technology (wheel, ard) and technological knowledge (e.g. ploughing). Their adoption can be understood in terms of pastoralism, plough agriculture and animal-based transport facilitating marginal agricultural colonisation and settlement nucleation. Ultimately it was revolutionary in terms of both origins and consequences. However, both the dating and significance of the archaeological evidence cited by Sherratt (and thus the validity of the model) have been questioned by several archaeologists. The dangers of dating the innovations on the basis of evidence such as iconography and waterlogged organic remains with restricted chronological and geographical availability have been pointed out. Sherratt has himself acknowledged that such dates provide a terminus ante quem for the invention of milking and ploughing. Direct evidence for how domestic animals were exploited in later prehistoric Europe has grown substantially, in quantity and diversity, since 1981.
Initially the concepts of the SPR were tested by analysing the appearance of certain artefact types (e.g. ploughs, wheeled vehicles). By the middle 1980s the most common means of testing the model derived from the more ubiquitous faunal (zooarchaeological) assemblages, through which mortality patterns, herd management and traction-related arthropathies were utilized to confirm or reject the SPR model. Many zooarchaeological studies in both the Near East and Europe have confirmed the veracity of the model. However the detection of milk residues in ceramic vessels is now considered the most promising means of detecting the origins of milking. Discovery of such residues has pushed back the earliest date for milking into the Neolithic. A study of more than 2,200 pottery vessels from sites in the Near East and Southeastern Europe indicated that milking had its origins in northwestern Anatolia. The lowland, coastal region around the Sea of Marmara favoured cattle-keeping. Pottery from these sites dating from 6500–5000 BC showed milk being processed into dairy products. Milk residues had already been found in vessels from the British Neolithic, but farming arrived in Britain late (c. 4000 BC). The seeming contradiction between the zooarchaeological and residue studies appears to be a matter of scale. The residues indicate that milking may have played a role in domestic animal exploitation from the later Neolithic. The zooarchaeological studies indicate that there was a massive change in the scale of such production strategies during the Chalcolithic and Bronze Age."

I present this background information about Technological Eras because, in subsequent posts, I plan to use new paleo-genetic findings to support an interpretation of Darwin's theory of "Natural Selection" under which mankind carries, alongside self-serving (Homo economicus-like) behavior, an at least equal propensity towards empathy, cooperation and interest in the common good; and to argue that Paul Ekman's micro-expressions were used by mankind to facilitate the socio-economic activity that exploded during the Holocene/Anthropocene.


“It is not the strongest or the most intelligent who will survive but those who can best manage change.” ― Leon C. Megginson

The first linked article discusses evidence that as mankind left the hunter-gatherer stage and entered the agricultural stage (which allowed for higher population densities), well-connected social networks blossomed as more aggressive male (genetic) behavior diminished. Another recent study (see the second link and the attached image) found that at this same transitional stage the pattern of genes passed down to subsequent generations indicates that, on average, roughly 17 women reproduced for every man who did; a clear sign that social networks (including organized farming, mining, trades and crafts) allowed more social males (whether cad-types or dad-types) to accumulate sufficient wealth to maximize their reproductive success. It seems logical that natural selection will further promote more social (empathic) behavior as we transition from the industrial age (dominated by fossil fuels) to the information age (which in my opinion will facilitate sustainable behavior with less externalization of dis-benefits like pollution and GHGs).

Extract: "Humans are the only primate with a chin, an adaptation that reflects the emergence of complex social networks among our ancestors.

…Beginning about 80,000 years ago, and accelerating about 20,000 years later following another wave of human migration out of Africa, human faces changed. Our brow ridges became less prominent, and most parts of our faces shrank, except for the chin, which then appeared more prominent.

The hormonal changes were associated with behavioral changes. As population density increased, small bands of hunter-gatherers began to encounter one another more frequently. As the number of interactions with outsiders increased, those who were less aggressive and more empathetic toward unfamiliar people – which tended to be men with relatively lower levels of testosterone – finally began to see success, cooperating to obtain food and sharing techniques to make tools, for instance. "What we're arguing is that modern humans had an advantage at some point to have a well-connected social network, they can exchange information, and mates, more readily, there's innovation," says Franciscus in the press release, "and for that to happen, males have to tolerate each other." Over time, these original economic girlie men had more reproductive success than their more hostile and aggressive counterparts. Even greater social tolerance led to greater population density, and eventually more gracile faces came to dominate the gene pool. And it all happened, the researchers argue, because humans started developing wider social networks. … Behavioral modernity – a term used by anthropologists to mean basically everything that humans have done for the past fifty or sixty millenniums – does not have a single cause. But, if these researchers are correct, the origins of our civilization, and of our ability to improve it over time, are written on your face."

http://phys.org/news/2015-03-wealth-power-stronger-role-survival.html

Extract: "Melissa Wilson Sayres, a leading author and assistant professor with ASU's School of Life Sciences, said, "Instead of 'survival of the fittest' in a biological sense, the accumulation of wealth and power may have increased the reproductive success of a limited number of 'socially fit' males and their sons." It is widely recognized among scientists that a major bottleneck, or decrease in genetic diversity, occurred approximately 50 thousand years ago when a subset of humans left Africa and migrated across the rest of the world. Signatures of this bottleneck appear in most genes of non-African populations, whether they are inherited from both parents or, as confirmed in this study, only along the father's or mother's genetic lines. "Most surprisingly to us, we detected another, male-specific, bottleneck during a period of global growth. The signal for this bottleneck dates to a time period four to eight thousand years ago, when humans in different parts of the world had become sedentary farmers," said senior author Toomas Kivisild from the Division of Biological Anthropology, University of Cambridge."

The first study indicates that by the end of the Pleistocene mankind's facial signaling uniquely qualified us to take advantage of the technological revolutions of the Holocene/Anthropocene (when mankind began to dominate nature), as indicated by the findings of the second linked study (see the attached image).

The following linked references indicate that wealth not only produced a genetic bottleneck in European male lineages about 5,500 years ago, but also enabled the large-scale displacement of one cultural group by another (certainly including by means of warfare and enslavement):

Extract: "The Bronze Age came to Europe and Asia 5000 years ago, leaving a trail of metal tools, axes, and jewelry that stretches from Siberia to Scandinavia. But was this powerful new technology an idea that spread from the Middle East to European and Asian people, or was it brought in by foreigners? Two of the largest studies of ancient DNA from Bronze Age and Iron Age people have now found that outsiders deserve the credit: Nomadic herders from the steppes of today's Russia and Ukraine brought their culture and, possibly, languages with them—and made a relatively recent and lasting imprint on the genetic makeup of Europeans and Asians. In the studies, published online today in Nature, two rival teams of geneticists analyzed the DNA from 170 individuals who lived at key archaeological sites in Europe and Asia 5000 to 3000 years ago. Both teams found strong evidence that a wave of nomadic herders known as the Yamnaya from the Pontic-Caspian, a vast steppeland stretching from the northern shores of the Black Sea and as far east as the Caspian Sea, swept into Europe sometime between 5000 and 4800 years ago; along the way, they may have brought with them Proto-Indo-European, the mysterious ancestral tongue from which all of today’s 400 Indo-European languages spring. These herders interbred with local farmers and created the Corded Ware culture of central Europe, named for the twisted cord imprint on its pottery. Their genes were passed down to northern and central Europeans living today, as one of the teams posted on a preprint server earlier this year and published today. But in a new twist, one of the studies also found that the Yamnaya headed east from their homeland in the Eurasian steppelands, moving all the way to the Altai Mountains of Siberia, where they replaced local hunter-gatherers.
This means that this distinctive culture of pastoralists, who had ox-driven wagons with wheels and whose warriors rode horses, dominated much of Eurasia, from north-central Europe to central Siberia and northern Mongolia. They persisted there until as recently as 2000 years ago. “Now we see the Yamnaya is not only spreading north into Europe; they’re also spreading east, crossing the Urals, getting all the way into central Asia, all the way into the Altai, between Mongolia, China, and Siberia,” says evolutionary biologist Eske Willerslev of the University of Copenhagen, author of one study. Archaeologists have long noticed connections between the steppe cultures such as the Yamnaya and Bronze Age people to the east, who lived in the Minusinsk Basin and the Altai Mountains of southern Siberia and Mongolia about 5500 to 4500 years ago. Like the steppe people, these eastern cultures, such as the Afanasievo of the Altai, buried their high-status people in a supine flexed position, covered in ochre with animal remains in their graves, beneath mounds (or stone kurgans in the steppe). They also made pointed-based pots, censers (circular bowls on legs), and were among the first people to drive carts with wheels and tame horses. All of these traits also link them to people in central and eastern Europe, including the Yamnaya and Corded Ware people, who are thought to have spoken early Indo-European languages. In one of the new studies, Willerslev’s international team sequenced the genomes of 101 ancient people from across Europe. They found that the Yamnaya of the Samara Valley in the northern steppe of Russia were genetically indistinguishable from the Afanasievo of the Altai in the Yenesey region of southern Siberia, which confirms archaeologists’ suggestions that there was a vast migration of steppe pastoralists to the east.
But unlike in Europe where the Yamnaya interbred with local farmers, the Yamnaya moving east completely replaced the local hunter-gatherers—perhaps because this region was only sparsely populated, Willerslev says. This eastern branch of the Yamnaya (or Afanasievo) persisted in central Asia and, perhaps, Mongolia and China until they themselves were replaced by fierce warriors in chariots called the Sintashta (also known as the Andronovo culture). These people from the Urals and Caucasus, who were genetically related to central Europeans, persisted in central Asia until 2000 years ago, which means that people in central Asia were actually more like Europeans than living Asians. It wasn’t until relatively recently—just 2000 years ago—that these “Caucasians” were replaced by immigrants from eastern Asia, such as the Karasuk, Mezhovskaya, and other Iron Age cultures that today make up the ancestry of people in central Asia. The implications of these genetic links between the Asian and European Bronze Age cultures are far-reaching: A few closely-linked groups from the steppe dominated a huge area from Europe to Asia and shaped major parts of the genetics of Europeans and Asians. “We now have samples all the way from Spain and the Atlantic Ocean to central Siberia,” says population geneticist Iosif Lazaridis of Harvard University and a co-author of the second study led by David Reich, also of Harvard. “The genetics of a lot of temperate Eurasia was completely unknown until a few years ago. Now it is almost completely known for this Bronze Age–Neolithic period.” Both studies found that these people brought genes for light skin and brown eyes with them, although northern hunter-gatherers already had light skin as well, Willerslev’s team found. In one surprising twist, it appears that even though the Yamnaya and these other Bronze Age cultures herded cattle, goats, and sheep, they couldn’t digest raw milk as adults.
Lactose tolerance was still rare among Europeans and Asians at the end of the Bronze Age, just 2000 years ago. “The lack of lactose tolerance is very surprising, because most people would have assumed that the ability to drink raw milk that you see in present-day Europeans was selected for and fixed at least by the beginning of Bronze Age,” Willerslev says. This massive migration from the steppe also may have spread the Indo-European languages that have been spoken across Europe and in central and southern Asia since the beginning of recorded history, including Italic, Germanic, Slavic, Hindi, and Tocharian languages, among others. If the genetic affinities do mirror linguistic families, this would be strong evidence against a rival hypothesis that farmers from the Middle East spread early Indo-European languages. Although geneticists think the correlation is remarkably strong, the issue is far from resolved for linguists, who are following the new ancient DNA work closely. “There is a real sense that after more than 2 centuries of linguistics trying to solve the Indo-European question, it's ancient DNA that is suddenly moving us fast toward a possible resolution,” says linguist Paul Heggarty of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. But it is not enough just to have data from northern Eurasia, where the Yamnaya’s movements may reflect only one part of the spread of Indo-European languages. Heggarty adds: “We need key data from the majority of the Indo-European-speaking world in the Mediterranean and south of the Black Sea-Caspian-Himalayas.” Other researchers say that the combined power of the two studies shows that it was people—not just pots or ideas—that spread the Bronze Age culture and genes.
The studies “provide additional evidence for a mass migration at the end of the Neolithic from the Pontic steppe region into central Europe,” says paleogeneticist Johannes Krause of the Max Planck Institute for the Science of Human History in Jena, Germany."

Extract: "Despite their allegiances to 47 different nations, 87 ethnic groups, and countless football teams, Europeans have a lot in common. Most speak closely related languages that are members of the great Indo-European language family. A new study uses ancient DNA to suggest that a massive migration of herders from the east shaped the genomes of most living Europeans—and that these immigrants may have been the source of Proto-Indo-European (PIE), the mysterious ancestral tongue from which the more than 400 Indo-European languages sprang. Based on DNA gathered from dozens of ancient skeletons, the study, described last week in a preprint posted on the bioRxiv server, reveals when and where different groups of people arrived in Europe and interbred with each other. One surprise is that a migration of herders from the steppes of today's Russia and Ukraine about 4500 years ago significantly shifted the genetic makeup of today's Europeans. Many living Europeans retain traces of this influx, which the authors link to an ancient culture of steppe herders known as the Yamnaya. The team behind the study further suggests that the Yamnaya people spoke either PIE or an early form of Indo-European language and brought it to central Europe, coming down on one side of a long-standing debate about the origins of PIE. That idea is supported by a second paper published this week in Language, which uses changes in words instead of genes to claim that the steppelands were the likely homeland for the origins of many Indo-European languages. But some researchers aren't convinced and say the genetic data aren't strong enough to connect ancient people known by their DNA to any particular language.

Abstract: "The proportion of Europeans descending from Neolithic farmers ~10 thousand years ago (KYA) or Palaeolithic hunter-gatherers has been much debated. The male-specific region of the Y chromosome (MSY) has been widely applied to this question, but unbiased estimates of diversity and time depth have been lacking. Here we show that European patrilineages underwent a recent continent-wide expansion. Resequencing of 3.7 Mb of MSY DNA in 334 males, comprising 17 European and Middle Eastern populations, defines a phylogeny containing 5,996 single-nucleotide polymorphisms. Dating indicates that three major lineages (I1, R1a and R1b), accounting for 64% of our sample, have very recent coalescent times, ranging between 3.5 and 7.3 KYA. A continuous swathe of 13/17 populations share similar histories featuring a demographic expansion starting ~2.1–4.2 KYA. Our results are compatible with ancient MSY DNA data, and contrast with data on mitochondrial DNA, indicating a widespread male-specific phenomenon that focuses interest on the social structure of Bronze Age Europe."

See also:
http://www.csmonitor.com/Science/2015/0522/How-a-few-Bronze-Age-forefathers-gave-rise-to-most-of-Europe-s-men

Extract: "New genetic research suggests that widespread population growth occurred in Europe 3,300 years ago – far more recently than previously thought. And according to a study published Tuesday in Nature Communications, only a small number of men contributed to that growth. … In a separate study published by Genome Research, scientists found a bottleneck in the genetic diversity of male lineages across five continents. That decline occurred somewhere between four and eight thousand years ago. Researchers from Arizona State University and the University of Cambridge attributed the shift to cultural changes – wealth and power became more important than biological fitness in matters of reproduction. As a result, roughly 17 women reproduced for every one man. In some ways, these findings aren’t so surprising. Agricultural societies center around land and property. The more land and property you have, the more wealthy and powerful you can become. Batini and her colleagues are slowly unraveling a complex and intertwining genetic history in Europe, but their work also touches on loftier human concepts, namely, social stratification and human attraction to power."



Now that I have established that modern people have both cad-like and dad-like genetic propensities, and that which behavior is expressed more is a function not only of wealth/power but also of socio-economic guarantors (such as facial micro-expressions) that promote group cohesion, I would like to turn to modern psychology findings about human nature that will support not only improved future socio-economic interactions but also faster and more beneficial programming of AI, and of human (cyborg) interaction with AI, in the coming decades:

I begin by referencing the book "Born to Be Good: The Science of a Meaningful Life" by Dacher Keltner, W.W. Norton & Co., 2009:

Extract: "…. Dacher Keltner investigates an unanswered question of human evolution: If humans are hardwired to lead lives that are “nasty, brutish, and short,” why have we evolved with positive emotions like gratitude, amusement, awe, and compassion that promote ethical action and cooperative societies? Illustrated with more than fifty photographs of human emotions, Born to Be Good takes us on a journey through scientific discovery, personal narrative, and Eastern philosophy. Positive emotions, Keltner finds, lie at the core of human nature and shape our everyday behavior―and they just may be the key to understanding how we can live our lives better."

Extract: "Leading scientists and science writers reflect on the life-changing, perspective-changing, new science of human goodness. In these pages you will hear from Steven Pinker, who asks, “Why is there peace?”; Robert Sapolsky, who examines violence among primates; Paul Ekman, who talks with the Dalai Lama about global compassion; Daniel Goleman, who proposes “constructive anger”; and many others. Led by renowned psychologist Dacher Keltner, the Greater Good Science Center, based at the University of California in Berkeley, has been at the forefront of the positive psychology movement, making discoveries about how and why people do good. Four times a year the center publishes its findings with essays on forgiveness, moral inspiration, and everyday ethics in Greater Good magazine. The best of these writings are collected here for the first time.

A collection of personal stories and empirical research, The Compassionate Instinct will make you think not only about what it means to be happy and fulfilled but also about what it means to lead an ethical and compassionate life."

Extract: "The Greater Good Science Center studies the psychology, sociology, and neuroscience of well-being, and teaches skills that foster a thriving, resilient, and compassionate society. Based at the University of California, Berkeley, the GGSC is unique in its commitment to both science and practice: not only do we sponsor groundbreaking scientific research into social and emotional well-being, we help people apply this research to their personal and professional lives. Since 2001, we have been at the fore of a new scientific movement to explore the roots of happy and compassionate individuals, strong social bonds, and altruistic behavior—the science of a meaningful life. And we have been without peer in our award-winning efforts to translate and disseminate this science to the public."

The following links indicate that mindfulness practice is rapidly gaining traction in both modern psychology & business practice (Apple, Procter & Gamble, General Mills):

http://en.wikipedia.org/wiki/Mindfulness

Extract: "Mindfulness is "the intentional, accepting and non-judgemental focus of one's attention on the emotions, thoughts and sensations occurring in the present moment", which can be trained by meditational practices derived from Buddhist anapanasati. The term "mindfulness" is derived from the Pali-term sati, "mindfulness", which is an essential element of Buddhist practice, including vipassana, satipaṭṭhāna and anapanasati."

Abstract: "In this study, the performance of Sevcik’s algorithm that calculates the fractal dimension and permutation entropy as discriminants to detect calming and insight meditation in electroencephalographic (EEG) signals was assessed. The proposed methods were applied to EEG recordings from meditators practicing insight meditation and calming meditation before as well as during both types of meditation. Analysis was conducted using statistical hypothesis testing to determine the validity of the proposed meditation-identifying techniques. For both types of meditation, there was a statistically significant reduction in the permutation entropy. This result can be explained by the increased EEG synchronization, which is repeatedly observed in the course of meditation. In contrast, the fractal dimension (FD) was significantly increased during calming meditation, but during insight meditation, no statistically significant change was detected. Increased FD during meditation can be interpreted as an increase in self-similarity of EEG signals during self-organisation of hierarchical structure oscillators in the brain. Our results indicate that fractal dimension and permutation entropy could be used as parameters to detect both types of meditation. The permutation entropy is advantageous compared with the fractal dimension because it does not require a stationary signal."
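Both discriminants named in the abstract are simple enough to sketch. The following Python is an illustrative sketch only (the standard Bandt–Pompe permutation entropy and the usual form of Sevcik's fractal-dimension estimate, not the study's actual code); the embedding parameters (m = 3, tau = 1) are my assumptions, not values taken from the paper:

```python
import math

def permutation_entropy(signal, m=3, tau=1):
    """Normalized Shannon entropy of ordinal patterns (Bandt-Pompe).
    Low values indicate the increased EEG synchronization reported
    during meditation; 1.0 corresponds to maximally irregular data."""
    counts = {}
    n = len(signal) - (m - 1) * tau
    for i in range(n):
        window = tuple(signal[i + j * tau] for j in range(m))
        # Ordinal pattern: the order of indices that sorts the window.
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))  # normalize by max entropy log(m!)

def sevcik_fd(signal):
    """Sevcik's fractal dimension estimate for a 1-D waveform:
    FD = 1 + ln(L) / ln(2(N-1)), with L the length of the curve after
    normalizing both axes to [0, 1]. Assumes a non-constant signal."""
    n = len(signal)
    lo, hi = min(signal), max(signal)
    ys = [(y - lo) / (hi - lo) for y in signal]   # amplitude to [0, 1]
    xs = [i / (n - 1) for i in range(n)]          # time axis to [0, 1]
    length = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                 for i in range(n - 1))
    return 1.0 + math.log(length) / math.log(2 * (n - 1))
```

A strictly monotone ramp yields a single ordinal pattern (entropy 0), while a perfectly alternating signal yields two equally likely patterns; real EEG falls in between, and the study's reported entropy reduction during meditation corresponds to this number decreasing.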

The following linked reference discusses the use of mindfulness practice as therapy for addiction (see also the associated image):

Extract: "The following paper is a summary of the main results and most striking insights derived from the case study entitled "Craving, Addictedness and In-Depth Systemics". The case study is based on the Therapy Centre for Drug Addicts START AGAIN in Maennedorf and Zurich, Switzerland. It was carried out for the Swiss Federal Office of Justice which considered this centre to be an "innovative trial model" due to its specific logic of intervention. Between summer 1995 and autumn 1998 the study was funded by the Swiss Federal Department of Justice and Police. The complete final report is approximately 400 pages long."

As a follow-up to my last post, the first three links lead to documentation and discussion of Pope Francis's Encyclical on climate change, socio-economic systems and morality. This document supports my premise that not only governments, but also private industry and individuals, must work to adapt our socio-economic system to the new reality of the Anthropocene.

Extract: “The earth is the Lord’s” (Psalms 24:1), to Him belongs “the earth and everything in it” (Deuteronomy 10:14). Thus, God denies any pretense of absolute ownership: “The land shall not be sold in perpetuity, for the land is mine; with me you are but aliens and tenants.” (Leviticus 25:23).

The next series of links lead to discussion that the Dalai Lama (and other groups following Buddha's teachings) support Pope Francis's line of thinking expressed in the recent Encyclical.

http://www.ecobuddhism.org/science/c_e/fhta

Extract: "The so-called conquest of nature overwhelms us with the natural fact of over-population and makes our troubles more or less unmanageable, because of our psychological incapacity to reach the necessary political agreements. It remains quite natural for men to quarrel and fight and struggle for superiority over one another. Where indeed have we "conquered nature"? – C.G. Jung
Our planet is 4.5 billion years old. Its evolution has proved resilient enough to outlast enormously destructive crises that ended whole geological eras. We cannot, of course, assume the same will be true for the biosphere we inherited in the current geological period.

Our human evolutionary lineage diverged from the chimpanzee line some six million years ago. The anatomically modern human species to which we belong is about 250,000 years old. These time-frames provide the back-story to all the scientific reports that now call for urgent protection of the planet's life support systems. For although the planet may be able (on a timescale of a hundred million years) to do without us, there is no evidence that we can do without a very particular set of ecological conditions on Earth. The main determinant of those conditions is climate.

Regional climates on Earth vary greatly from its equator to its poles, because the parallel rays of the Sun fall unevenly across the curve of the planet's surface. Global average climate is the average of all those climatic regions. It showed little variation during the last 12,000 years, the Holocene epoch that gave rise to human civilization. The preceding 140,000 years of extreme climate variation did not allow humans to settle anywhere for long. We could live only as hunter-gatherers.

Stable global climate permitted us to develop agriculture. With this came towns and cities and advanced cultures. By 2500 years ago, extraordinary developments had become possible. In Greece, masterpieces of classical art and philosophy appeared alongside the first democracy. In India, the prince Siddhartha Gautama renounced a kingdom to accomplish comprehensive self-realization. The stable climate of the Holocene was maintained by a self-regulating atmosphere that finely balanced the concentrations of greenhouse gases (particularly carbon dioxide). These gases partially block the radiation of solar heat, from the planet back into space. Prior to the Industrial Revolution of the 19th century, the concentration of carbon dioxide in the Holocene atmosphere was 280 ppm (parts per million). Now it is 400 ppm."

http://www.ecobuddhism.org/science/popconres/twin_elephants

Extract: "…. in the past two centuries, population growth and expanding per capita consumption have contributed roughly equally to humanity's assault on its life-support systems. Reducing the assault and transitioning to a sustainable society will require action on both factors. It will take much longer to humanely reduce population size than to alter human consumption patterns. Many decades of moderately reduced fertility would be required to have a significant effect on human numbers, but with enough incentive, consumption patterns can be transformed very rapidly, as World War II mobilizations showed dramatically. Because of that time difference, moving toward population reduction now in the U.S. and globally is required if humanity is to attain a sustainable civilization. But if overconsumption by the rich continues to escalate, the benefits of ending population growth will be compromised. And if underconsumption by the poor is allowed to continue or worsen, then the cooperation we need to resolve the human predicament is unlikely to materialize. Equity issues and environmental issues are also conjoined twins.

Sustainability, from The Population Bomb to The Dominant Animal

Paul Ehrlich, Professor of Population Studies at Stanford University, is the author of The Population Bomb (1984) and co-author with Anne Ehrlich of The Dominant Animal (2010). He is a fellow of the U.S. National Academy of Sciences, American Academy of Arts and Sciences and American Philosophical Society. His research centres on endangered species, cultural evolution, environmental ethics, and the preservation of genetic resources.

Ehrlich has described how 3 major factors determine sustainability. Human Impact (I) on the environment is the product of Population (P), Affluence (A), & Technology (T): I = P × A × T. In the following talk, delivered with his trademark dynamism, he discusses how the sustainability issue has changed over the last 30 years."
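The IPAT identity above is plain multiplication, which a toy sketch makes concrete. The numbers below are made up for illustration only (they are not Ehrlich's figures), with the technology factor read as environmental impact per unit of consumption:

```python
def impact(population, affluence, technology):
    """Ehrlich's IPAT identity: Impact = Population x Affluence x Technology,
    where 'technology' is impact per unit of consumption."""
    return population * affluence * technology

# Illustrative, made-up numbers: if affluence doubles while impact per unit
# of consumption is halved (cleaner technology), total impact is unchanged.
baseline = impact(7.3e9, 1.0, 1.0)
cleaner = impact(7.3e9, 2.0, 0.5)
assert baseline == cleaner
```

The point of the identity for this thread: gains from any one factor (e.g. cleaner technology) can be fully offset by growth in the other two.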

Thanks for the links on the Bronze Age migrations. I studied Indo-European languages and linguistics in grad school and still try to stay somewhat abreast of developments in that field.


"A force de chercher de bonnes raisons, on en trouve; on les dit; et après on y tient, non pas tant parce qu'elles sont bonnes que pour ne pas se démentir." Choderlos de Laclos "You struggle to come up with some valid reasons, then cling to them, not because they're good, but just to not back down."

In summary (for my preceding posts in this thread): while it is impossible to precisely project how the human socio-economic system will respond to the many different pressures/threats of the Anthropocene (see the "Modeling the Anthropocene" & the "Anthropogenic Existential Risk" threads), I believe that, working with a better understanding of human nature, information theory, AI and computer interface technology, governments, NGOs, private industry and individuals can effectively work together to reduce the fragility of our current socio-economic system and so better adapt to the Anthropocene in a Darwinian "Natural Selection" frame of mind.

However, first we need to stop demanding that academics provide decision makers with bullet-proof predictions before we take appropriate adaptive measures; because, as Nassim Nicholas Taleb said in 2007: "Don't ask the barber if you need a haircut – and don't ask an academic if what he does is relevant."

While I personally love to see the findings of basic research, we all need to recognize that in our current international socio-economic system, basic researchers are not (and never will be) held accountable for applying new findings to solve real-world problems in an appropriate timeframe. In the world that we all live in, this responsibility falls primarily on the private sector (and their lobbyists) as checked/regulated by NGOs, government regulations, press coverage and judicial review. However, our private sector is currently driven by Homo economicus thinking, which results in a very fragile system; as indeed, during socio-economic intercourse, lust for power is lewd while love of the common good is lyrical.

The linked article indicates that business leaders are not currently prepared to adequately meet even the climate change challenge at hand (let alone all of the other Anthropogenic Existential threats):

Extract: "Will they, won’t they? Companies gathered for last week’s business and climate summit agonised over whether governments in Paris later this year will deliver the commitments to cut emissions to avoid 2C of warming, and whether the policies to pursue targets will be agreed jointly with business.

But is business over-thinking, even on its own terms? Confusion and obsession about the environment and about climate change … creates opportunity for the investor – this is hard-core capitalist advice from James Altucher and Douglas R. Sease, authors of the audaciously titled Guide to Investing in the Apocalypse published by the Wall Street Journal.

Money is going to be spent, they point out, and so, in their eyes, there is money to be made. In fact, across a range of scenarios which read almost like a field manual for the “shock doctrine” described by Naomi Klein, the message is that if you can’t make money by seeing opportunity where others see peril, you’re probably in the wrong game.

What does this tell us about how business sees itself in the struggle to tackle climate change? That maybe there’s a few diehard exceptions who will obstruct you, but at worst you’ll encounter amoral opportunists determined to make money whatever happens. Apart from that there are dynamic entrepreneurs out in front not waiting for government and frustrated business leaders, stuck and waiting for an official green flag.

Concerns about commitments, carbon pricing and adequate resources are genuine, though not the unique preserve of the business community, but in these versions there is barely any conception that business itself might still be part of the problem.

Business may want a clear signal from government, but the signal from bellwether businesses with a major climate impact is anything but clear.

…Everyone wants to hear positive messages about win-wins for business and the environment, and undoubtedly there are many. Boardrooms would jump for joy if the EU’s recent commitment to quantitative easing were designed in such a way as to stimulate, directly, the low carbon economy.

But there’s also a need in the business community to face some tough reality where a company’s core activity is incompatible with preserving a stable climate. Swiss billionaire Stephan Schmidheiny was an early corporate advocate of environmental action, setting up the World Business Council for Sustainable Development after the 1992 Earth summit. Among other things, his family wealth came from the very climate unfriendly manufacture of cement. I once asked him several years ago if, push come to shove, he would choose company or climate. Back then he chose survival of the company.….

The other tough reality demanding more honest business reflection is the incompatibility of further, orthodox economic growth in the OECD with the 2C target. The structure of markets relying on the shareholder model also demands that companies must grow. But the best analysis available suggests that growth in OECD countries cannot be squared with halting warming at 2C, 3C or even 4C. Where are the companies brave enough to even ask the question of what the optimal size of a company might be, after which it should grow no further, and how that company should be governed and function with regard to investors?

For all the decades of discussing triple bottom line accounting for social, economic and environmental performance, the single bottom line demanded by financial markets remains a trump card. Explaining why it wouldn’t invest in wind power in the UK, Shell said it “couldn’t make the numbers work”."

To get our private sector onto a more productive track, I recommend considering greater use of Public-Private Partnerships (PPPs), following the advice Aristotle gave Alexander the Great before he set out on his conquest of the known world: that to manage such a complex, polyglot conquest he would need to win the hearts and minds of the people by demonstrating that (a) he had the people's best interests at heart; (b) he had a plan; and (c) he had the capability to implement that plan. (Relevant quotes from Aristotle include: "The roots of education are bitter, but the fruit is sweet.", "Man is by nature a political animal.", and "Piety requires us to honour truth above our friends.")

To deal with the "free-rider" (Tragedy of the Commons) problem, the international 1% will need to convince the world's polyglot population that they have everyone's best interests at heart (much as the body believes that the mind has its best interests at heart via the mind-body relationship), and that they have a workable, dynamic plan to address the world's challenges (which may take some decades to develop, much as Ray Kurzweil plans ahead so that his schemes are mature when the relevant technology is ripe for implementation). Next, given that I believe our biggest obstacle to effective action is entropic uncertainty (as in overcoming the Scandal of Philosophy when making decisions on the margin), I believe that the 1% (most frequently working in regulated PPPs) will need to demonstrate adequate capability with information theory, AI, robotics, and smart systems to manage the world's socio-economic system in much the same way that derivative algorithms/technology allow fund managers to manage portfolio risk and opportunity.

In subsequent posts, I plan to discuss various specific means by which systemic entropic uncertainty can be better managed in order to adapt our current system into a more sustainable system; however, you might also want to see past suggestions listed in the "Triage" thread found here:

Abstract: "It has been widely acknowledged that people’s beliefs and perceptions influence implementation of climate change adaptation. Regarding perception barriers, some authors keep highlighting the confused definition of adaptation and its various interpretations. Our research contributes to this area by exploring how adaptation to climate change is perceived through 83 semi-structured interviews with stakeholders (public and municipal organizations, ENGO, private sector) from Montreal and Paris. Our results demonstrate a mirror opposition in the perception of adaptation to climate change. Indeed, while several respondents interpreted adaptation as a resignation, many interviewees perceived adaptation as an opportunity. The analysis showed that adaptation referring to resignation includes the ideas of a non-action and detrimental to mitigation; an excuse for not changing; anxiety about climate change; fatalism; and human failure. Adaptation perceived as an opportunity is divided into a source of creativity; toward sustainable development; led by the emergency; and awareness and making society of its responsibilities. Our findings confirm that terminological ambiguity of the term “adaptation” has to be considered in the decision-making process, which can be influenced by the perception of stakeholders."

Logged

“It is not the strongest or the most intelligent who will survive but those who can best manage change.” ― Leon C. Megginson

Abstract: "The politics of climate change is not concerned solely with rival scientific claims about global warming but also with how best to govern the climate. Despite this, categories in climate politics remain caught up in the concepts of the ‘science wars’, rarely progressing far beyond the denier/believer-dichotomy. This article aims to nudge climate politics beyond the polarized scientific debates while also counteracting the de-politicisation that comes from assuming scientific claims lead directly to certain policies. First existing typologies of climate political positions are reviewed. Diverse contributions make up an emerging field of ‘climate politology’ but these tend to reduce climate politics either to views on the science or to products of cultural world-views. Drawing on policy analysis literature, a new approach is outlined, where problem-definitions and solution-framings provide the coordinates for a two-dimensional grid. The degree to which climate change is considered a ‘wicked’ problem on the one hand, and individualist or collectivist ways of understanding political agency on the other, provide a map of climate political positions beyond ‘believers’ vs ‘deniers’."


Abstract: "This paper asks how the social sciences can engage with the idea of the Anthropocene in productive ways. In response to this question we outline an interpretative research agenda that allows critical engagement with the Anthropocene as a socially and culturally bounded object with many possible meanings and political trajectories. In order to facilitate the kind of political mobilization required to meet the complex environmental challenges of our times, we argue that the social sciences should refrain from adjusting to standardized research agendas and templates. A more urgent analytical challenge lies in exposing, challenging and extending the ontological assumptions that inform how we make sense of and respond to a rapidly changing environment. By cultivating environmental research that opens up multiple interpretations of the Anthropocene, the social sciences can help to extend the realm of the possible for environmental politics."


To improve our preparations for adapting to the Anthropocene, we first need to look at anthropological trends such as the urbanization trend discussed in the Forbes article below, which points out that by 2050 there will most likely be 7.4 billion people living in cities (more than today's entire global population) and that up to 96 percent of this urbanization will occur in the developing world. The article emphasizes that the current global socio-economic system of dealing with the developing world through national-level actors cannot meet this challenge adequately, and that international financing (private and public) needs to reach past national governments in the developing world, down to local and city officials, in order to build new sustainable infrastructure to support this huge trend. Further, it is my position that just as derivative algorithms (a simple form of AI pattern recognition) lubricated the neo-liberal economic (i.e. conservative politics) international market expansion of the 1990's (think Ronald Reagan, et al.), more sophisticated AI (such as that provided globally by Google through the Internet and smartphones) will allow international financing to reliably reach directly down to local and city decision makers in the developing world in the coming decades, providing the services (physical and commercial infrastructure, etc.) needed to support this huge urbanization trend. For example, already:

(a) Smartphones allow small business owners in the developing world to video corrupt officials demanding bribes, and when such videos are posted on YouTube, demands for bribes decrease.

(b) Smartphones are rapidly facilitating the use of micro-loans to support micro-businesses in developing countries, which would not be possible without the Internet.

(c) The Internet of Things will provide smart devices that can better and remotely operate infrastructure in the third world.

http://www.forbes.com/sites/danielrunde/2015/02/24/urbanization-development-opportunity/

Extract: "For the first time in history more than half the world’s population resides in cities. The world’s urban population now stands at 3.7 billion people, and this number is expected to double by 2050. The trend towards urbanization is only accelerating and 96 percent of all urbanization by 2030 will occur in the developing world. This global shift toward a more urban global population has profound implications for a wide range of issues including food, water, and energy consumption. The move towards urban concentration is a fact, and as city life becomes a reality for an ever-greater share of the world’s population, governments, companies, and civil society must recognize that they are largely unequipped to deal with city-level problems.

The international system typically operates through national level actors, and the way we think about international challenges is through national capitals and national governments. For example, diplomacy is carried out in national capitals and lending from multilateral development banks is often disbursed to national governments. Given the growing importance and cross-cutting nature of development challenges in urban centers, this issue needs to move up on the global agenda and new ways of working with and through provincial and city governments must be emphasized.

If cities are equipped with the right leaders, strategies, and financing, urbanization can bring about immense positive changes in the lives of billions. Cities are engines of economic growth and cultural development and can offer countless benefits to their inhabitants. UN Habitat released a report in 2011 which concluded that cities are responsible for disproportionately higher rates of economic growth when compared with rural areas: with just over 50% of total world population, cities generate more than 80% of global GDP. This effect is even more pronounced in developing countries: for example, Nairobi is home to just 9% of Kenya’s population but generates 20% of GDP."


Extract: ""The biggest risk of all that we face is that we’re addressing the wrong problem," University of Oslo sociologist Karen O’Brien told a week-long conference of climate researchers in Paris.

Using more renewable energy and setting up crop insurance schemes and early warning systems is important, she said. But climate change “is more than a technical challenge”.

Finding genuine solutions will have to involve “looking at who has power and how that might need to change”, she said.

The rush to secure oil drilling rights in the Arctic, for instance, is painted by some analysts as the potential start of a new Cold War, as countries compete to gain access to some of the planet’s last large untapped oil deposits in pursuit of profit and energy security, she said.

But it is happening despite science that shows a third of the world's already discovered oil reserves - as well as half of gas reserves and 80 percent of coal reserves - must stay in the ground to avoid runaway climate change that could see food supplies collapse, O’Brien and other experts said.

Climate risks will not be tackled effectively unless such contradictions are dealt with, O’Brien said. One way to achieve that could be through people stepping up to try and change the way governments and institutions behave.

“Small changes can make big differences, and individuals, especially when working together, can generate big social change,” she said.

Bending political and economic power to solve climate problems will be difficult, but "we are transforming either way", O'Brien said, as a world 4 degrees Celsius warmer – the current trajectory for 2100 – would reshape life on Earth.

Adapting to some of the accompanying problems, including a rise in deaths from extreme heat in South Asia, would be largely impossible, she said."

URBAN OPPORTUNITIES

Some of the biggest opportunities to put the world on a different pathway may lie in fast-growing cities, said Shobhakar Dhakal of the Asian Institute of Technology in Thailand.

Already more than 70 percent of global emissions caused by energy use come from cities, according to scientists on the Intergovernmental Panel on Climate Change. By 2050, urban areas will have 2.6 billion more people, most of them in Asia and Africa, Dhakal said.

If rapidly urbanizing areas can build homes close to jobs and services and make walking and public transport good options, climate-changing emissions could be reduced dramatically, he said.

“Our ability to make deep cuts to global greenhouse gas emissions depends to a large extent on what kinds of cities and towns we build,” Dhakal said.

Real progress on climate change and reducing vulnerability to its impacts will also require efforts to coordinate a huge range of activities, including social policy, urban planning, insurance, weather monitoring and deploying the right technologies, said Nobuo Minura, president of Japan’s Ibaraki University.

Johan Rockstrom of the Stockholm Resilience Centre warned that “we as humanity are now in a position to disrupt the stability of the entire world system” by driving climate change.

Many economic and government systems have been designed around a high-emission way of doing things, he said. Now, “we need a new relationship between people and the planet”."


On the one hand I say yes, on the other hand all the Anthropocene threads are in this category now and are largely about science. It'd be a shame to spread them.

What does the topic opener think?

Neven, to answer your question, I think this topic belongs in the Science folder, as information theory is advancing so rapidly that entropic uncertainty is shrinking fast enough to make this more of a science issue than a policy issue, as indicated by the following information on quantum computing and AI:

For better or worse, the three following linked articles indicate that quantum computing technology is advancing rapidly and works very well with AI, so the companies and governments that make the most rapid advances will reap the greatest financial rewards.

http://www.economist.com/news/science-and-technology/21654566-after-decades-languishing-laboratory-quantum-computers-are-attracting

Extract: "Quantum computers are not better than classical ones at everything. They will not, for example, download web pages any faster or improve the graphics of computer games. But they would be able to handle problems of image and speech recognition, and real-time language translation. They should also be well suited to the challenges of the big-data era, neatly extracting wisdom from the screeds of messy information generated by sensors, medical records and stock markets. For the firm that makes one, riches await."

http://www.ibtimes.co.uk/ibm-watson-cto-quantum-computing-could-advance-artificial-intelligence-by-orders-magnitude-1509066

Extract: "IBM is yet to announce plans to integrate a quantum computer system with Watson but the software giant recently unveiled a new superconducting chip that demonstrates a technique crucial to the development of quantum computers.

The chip was a leap forward in research into quantum computers, as it was the first to integrate quantum bits – or qubits – into a two-dimensional grid. This is important for making a practical machine but there is still a long way to go before quantum computers find practical use.

Nasa, Google and the CIA are among the companies and organisations also working on quantum computers, while the UK government has outlined a £270m ($420m) strategy into quantum technology growth through the UK National Quantum Technology Programme.

Nasa's Quantum Artificial Intelligence Laboratory (QuAIL) is working specifically on assessing the potential quantum computers have with regards to artificial intelligence, though the agency is hazy on what exactly any machine might be used for beyond helping "address Nasa challenges"."

Also see:
https://www.linkedin.com/pulse/artificial-intelligence-quantum-computing-utopia-dystopia-dk-matai

Extract: "China Teams Lead The Way

In what may be one of the first clear set of demonstrations of Artificial Intelligence on a Quantum Computer, one Chinese team of physicists has trained a quantum computer to recognise handwritten characters, the first standalone demonstration of “Quantum Artificial Intelligence”, and another Chinese team using a small-scale photonic quantum computer is demonstrating that quantum computers may be able to exponentially speed up the rate at which certain machine learning tasks are performed, and in some cases, reducing the time from hundreds of thousands of years to mere seconds.

Zhaokai Li et al at the University of Science and Technology of China in Hefei have demonstrated machine learning on a Quantum Computer for the first time. Their Quantum Computer can recognise handwritten characters, just as humans can do, in what Li et al are calling the first demonstration of “Quantum Artificial Intelligence”.

Chao-Yang Lu et al at the University of Science and Technology of China in Hefei, have also demonstrated a quantum entanglement-based machine learning method called quantum-based vector classification. This quantum-based vector classification method can be used for both supervised and unsupervised machine learning, and so could have a wide variety of applications. It's also ubiquitous in our daily lives, such as in face recognition, email filtering, and recommendation systems for online shopping.

Tiny Chinese Quantum Computer

The first Chinese team's tiny Quantum Computing machine consists of a small vat of the organic liquid carbon-13-iodotrifluroethylene, a molecule consisting of two carbon atoms attached to three fluorine atoms and one iodine atom. Crucially, one of the carbon atoms is a carbon-13 isotope. This molecule is handy because each of the three fluorine atoms and the carbon-13 atom can store a single Qubit.

This works by placing the molecule in a magnetic field to align the spins of the nuclei and then flipping the spins with radio waves. Because each nucleus sits in a slightly different position in the molecule, each can be addressed by slightly different frequencies, a process known as nuclear magnetic resonance. The spins can also be made to interact with each other so that the molecule acts like a tiny logic gate when zapped by a carefully prepared sequence of radio pulses. In this way the molecule processes data. And because the spins of each nucleus can exist in a superposition of spin up and spin down states, the molecule acts like a tiny Quantum Computer.

Having processed the quantum information, physicists read out the result by measuring the final states of all the atoms. Because the signal from each molecule is tiny, physicists need an entire vat of them to pick up the processed signal. In this case, an upward peak in the spectrum from the carbon-13 atom indicates the character is a 6 while a downward peak indicates a 9. “The successful classification shows the ability of our quantum machine to learn and work like an intelligent human,” say Li et al.

Physicists Point to Holy Grail

Physicists led by the Nobel Laureate Richard Feynman have long claimed that Quantum Computers have the potential to dramatically outperform the most powerful conventional processors utilised in Classical Computing. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time.

2^20 = 2x2x2...(20 times) = 1 million plus

The advantage comes when one of those two states represents a 1 and the other a 0, forming a Quantum bit or Qubit. In that case, a single quantum object -- an atomic nucleus for example -- can perform a calculation on two numbers at the same time. Two nuclei can handle 4 numbers, 3 nuclei 8 numbers and 20 nuclei can perform a calculation using more than a million numbers simultaneously! That’s why even a relatively modest Quantum Computer could dramatically outperform the most advanced supercomputers today.

The new method takes advantage of quantum entanglement, in which two or more objects are so strongly related that paradoxical effects often arise since a measurement on one object instantaneously affects the other. Here, quantum entanglement provides a very fast way to classify vectors into one of two categories, a task that is at the core of machine learning.

What Happens Next?

"To calculate the distance between two large vectors with a dimension of 10^21 -- or, in the language of Big Data, we can call it 1 Zettabyte (ZB) -- a GHz clock-rate classical computer will take about hundreds of thousands of years," Prof Lu at the University of Science and Technology of China in Hefei states. "A GHz clock-rate Quantum Computer, if we can build it in the future, with the exponential speed-up, will take only about a second to estimate the distance between these two vectors after they are entangled with the ancillary qubit."

"Machine learning has been all around, and will likely play a more important role in the age of Big Data with the explosion of electronic data," Prof Lu states. "It is estimated that every year [Big Data] grows exponentially by 40%. On the other hand, we have bad news about Moore's law: If it is to continue, then in about 2020, the chip size will shrink down to the atomic level where quantum mechanics rules. Thus, the speed-up of classical computation power faces a major challenge. Today, we may still be good running machine learning and other computational tasks with our good old classical computers, but we might need to think of other ways in the long run."

"We are working on controlling an increasingly large number of quantum bits for more powerful quantum machine learning," Prof Lu said. "By controlling multiple degrees of freedom of a single photon, we aim to generate 6-photon, 18-qubit entanglement in the near future. Using semiconductor quantum dots, we are trying to build a solid-state platform for approximately 20-photon entanglement in about five years. With the enhanced ability in quantum control, we will perform more complicated quantum artificial intelligence tasks.""
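The arithmetic in the extract (each additional qubit doubles the number of basis states, so 20 qubits span 2^20, over a million) can be checked with a short Python sketch. This is only a classical simulation of a uniform superposition; the function names are my own, not from the article:

```python
import math

def uniform_superposition(n_qubits):
    """Amplitudes of a uniform superposition over all 2**n basis states."""
    dim = 2 ** n_qubits              # state space doubles with each qubit
    amp = 1.0 / math.sqrt(dim)       # equal amplitude on every basis state
    return [amp] * dim

for n in (1, 3, 20):
    psi = uniform_superposition(n)
    total_prob = sum(a * a for a in psi)   # Born rule: |amp|^2 must sum to 1
    print(n, len(psi), round(total_prob, 6))
```

For n = 20 the state vector holds 2^20 = 1,048,576 amplitudes, matching the extract's "more than a million numbers" for 20 nuclei; of course, a classical simulation pays that exponential cost explicitly, which is exactly the advantage a real quantum computer is claimed to avoid.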


We should all realize that the linked research on monkeys and rats demonstrates that, by interfacing through a computer, "… the brains of two or more animals (either monkeys or rats) can be networked to work together as part of a single computational system to perform motor tasks (in the case of monkeys) or simple computations (multiple rat brains)." Furthermore, the networked brains could solve problems better than a single brain could; therefore, we can all expect that sometime in the next fifteen to twenty years human brains will be networked together through an Internet containing various forms of AI (Google's, Facebook's, IBM's, China's, etc.), resulting in "Brainets" of forms that are unimaginable today.

Extract: "Neuroscientists demonstrate operation of the first network of brains (Brainets) in both primates and rodents

Neuroscientists at Duke University have introduced a new paradigm for brain-machine interfaces that investigates how the brains of two or more animals (either monkeys or rats) can be networked to work together as part of a single computational system to perform motor tasks (in the case of monkeys) or simple computations (multiple rat brains). These functional networks of animal brains have been named Brainets by the authors of the studies. In the two Brainet examples reported in the July 9th 2015 issue of Scientific Reports, groups of animals were able to literally merge their collective brain activity together to either control the movements of a virtual avatar arm in three dimensions to reach a target (monkey Brainet), or to perform a variety of computational operations (rat Brainet), including pattern recognition, storage and retrieval of sensory information and even weather forecasting. These latter examples suggest that animal Brainets could serve as the core of organic computers that employ a hybrid digital-analog computational architecture."

See also:
http://www.cbc.ca/news/technology/monkeys-rats-form-brainet-to-move-virtual-arm-predict-weather-1.3146684

Extract: "It seems three monkey brains are better than one when it comes to performing simple tasks using only the power of thought.

Scientists at Duke University wired the brains of adult rhesus macaque monkeys to form a network, or "brainet," and observed them in their separate rooms as they were each given partial control over a virtual arm they could see on a screen.

When the animals worked together, they were able to synchronize their brain activity to guide the arm of an avatar, allowing them to reach for a virtual ball. Their reward was a small drink of juice.

One monkey acting alone could not move the arm in three dimensions, but three working together could control the 3D movements and reach the moving target.

The monkeys were connected only to a computer, but not one another.

However, in a second set of experiments, the team directly wired the brains of four rats together, and to a computer, to allow the animals to transmit neural brain activity to each other."
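The claim that networked brains outperform a single brain has a simple statistical analogue: averaging several independent noisy estimates reduces error roughly as 1/sqrt(n). The Python toy below is my own illustration of that intuition, not the Duke team's actual decoding method, and all names in it are hypothetical:

```python
import random
import statistics

random.seed(42)
TARGET = 1.0     # the "reach target" each noisy estimator is trying to hit
NOISE = 0.5      # per-individual estimation noise (standard deviation)
TRIALS = 10_000

def brainet_estimate(n_brains):
    """Average n independent noisy estimates of TARGET (a toy 'Brainet')."""
    return statistics.fmean(random.gauss(TARGET, NOISE) for _ in range(n_brains))

for n in (1, 3):
    errors = [abs(brainet_estimate(n) - TARGET) for _ in range(TRIALS)]
    print(n, "brains: mean error", round(statistics.fmean(errors), 3))
```

Running this shows the three-estimator "Brainet" landing markedly closer to the target on average than a single estimator, echoing the monkeys' three-brain control of the 3D arm; real neural decoding is of course far more involved than simple averaging.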

I would like to add to my last post that when human-driven "Brainets" are common via the Internet (in a couple of decades' time), the more that connected people practice mindfulness techniques (see below), the better off we will all be:

Extract: "Mindfulness-based cognitive therapy (MBCT) is a psychological therapy designed to aid in preventing the relapse of depression, specifically in individuals with Major depressive disorder (MDD). It uses traditional Cognitive behavioral therapy (CBT) methods and adds in newer psychological strategies such as mindfulness and mindfulness meditation. Cognitive methods can include educating the participant about depression. Mindfulness and mindfulness meditation focus on becoming aware of all incoming thoughts and feelings and accepting them, but not attaching or reacting to them. Like CBT, MBCT functions on the theory that when individuals who have historically had depression become distressed, they return to automatic cognitive processes that can trigger a depressive episode. The goal of MBCT is to interrupt these automatic processes and teach the participants to focus less on reacting to incoming stimuli, and instead accepting and observing them without judgment. This mindfulness practice allows the participant to notice when automatic processes are occurring and to alter their reaction to be more of a reflection.

Beyond its use in reducing depressive acuity, research additionally supports the effectiveness of mindfulness meditation upon reducing cravings for substances that people are addicted to. Addiction is known to involve the weakening of the prefrontal cortex that ordinarily allows for delaying of immediate gratification for longer term benefits by the limbic and paralimbic brain regions. Mindfulness meditation of smokers over a two-week period totaling 5 hours of meditation decreased smoking by about 60% and reduced their cravings, even for those smokers in the experiment who had no prior intentions to quit. Neuroimaging of those who practice mindfulness meditation has been shown to increase activity in the prefrontal cortex, a sign of greater self-control."


While some reliance on technology to adapt to the Anthropocene is inevitable, the linked article indicates that US dependence on GMOs to address climate change is rapidly increasing; which, in turn, is increasing the fragility of the US food supply chain (including increased risk to public health):

Extract: "Twenty years ago, less than 10 percent of corn and soybean acres in the United States were planted with genetically engineered seeds, the type of biotechnology now commonly known as GMOs.

Farmers have rushed to adopt the engineered seeds since then, in part because of climate change concerns.

As a result, about 90 percent of all corn and soybean acres were GMOs last year, equivalent to an area larger than California.

…“Today, no single issue will impact the world and the success of our farmer customers, our partners and our company more than sustainability, which includes our ability to adapt to and mitigate the impacts of climate change,” Grant said.

Monsanto’s inventory of seeds has dwarfed its top competitors, which include businesses such as Syngenta and Pioneer, along with research institutions such as the University of Florida and Cornell.

…“When it comes to GMOs, the more we’ve looked into them throughout the years, the more concerned we’ve become,” said Patty Lovera, assistant director of the progressive-leaning consumer advocacy group Food and Water Watch."


ABSTRACT: "Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human-dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE-based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions."
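As a rough illustration of the abstract's sign-based stress regimes (my own sketch; the function name and the exact regime mapping are illustrative, not the paper's):

```python
def groundwater_stress_regime(use, recharge):
    """Classify an aquifer by groundwater use versus mean annual
    recharge, loosely following the four regimes named in the
    abstract (this mapping is illustrative, not the paper's)."""
    if recharge > 0 and use > 0:
        # Renewable Groundwater Stress ratio: use over availability.
        ratio = use / recharge
        # Use exceeding renewable recharge marks an overstressed system.
        return "Overstressed" if ratio > 1 else "Variable Stress"
    if recharge <= 0 and use > 0:
        # Positive use against non-renewing recharge is human-dominated.
        return "Human-dominated Stress"
    return "Unstressed"
```

For example, an aquifer with use of 2 units against 1 unit of annual recharge would classify as "Overstressed" under this toy mapping.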

For my 5,000th post I feel like briefly elaborating on some of my prior comments about how I see our current global socio-economic system possibly adapting to the Anthropocene before the end of this century.

I begin by quoting philosopher Peter Singer, who notes that evolution has: "… bequeath(ed) humans with a sense of empathy – an ability to treat other people's interests as comparable to one's own. Unfortunately, by default we apply it only to a very narrow circle of friends and family. People outside that circle were treated as subhuman and can be exploited with impunity. But over history the circle has expanded … from the village to the clan to the tribe to the nation to other races to other sexes … and other species."

Next, I reiterate that in his book "The Descent of Man" Charles Darwin argues that natural selection developed in man: "… the greater strength of the social or maternal instincts than that of any other instinct or motive." Darwin reasoned that social instincts such as sympathy, empathy and compassion must be mankind's strongest instincts because compassionate individuals are more successful in raising healthier offspring that can successfully adapt to the ever-changing demands of evolutionary pressures.

Unfortunately, Darwin's (scientific) views on human compassion found few adherents among those who provided the philosophical underpinnings of our current Western-based global socio-economic system, as illustrated by the following quotes that skeptically dismiss this compassionate way of thinking:

"A feeling of sympathy is beautiful and amiable; for it shows a charitable interest in the lot of other men … But this good-natured passion is nevertheless weak and always blind." Immanuel Kant.

"If any civilization is to survive, it is the morality of altruism that men have to reject." Ayn Rand.

"Hence a prince who wants to keep his authority must learn how not to be good, and use that knowledge, or refrain from using it, as necessity requires." Machiavelli

Unfortunately, too many captains of our modern global socio-economic system associate the feeling of pride with strong groups (such as fossil fuel related activities), and associate compassion (Kant's "good-natured passion") with weakness, or with weak groups that are in need of help. Again, such individuals feel totally justified in their position, as Kant states (in "Observations on the Feelings of the Beautiful and Sublime"): "For it is not possible that our heart should swell from fondness for every man's interest and should swim in sadness at every stranger's need; else the virtuous man, incessantly dissolving like Heraclitus in compassionate tears, nevertheless with all this goodheartedness would become nothing but a tender-hearted idler."

In my opinion, the fundamental challenge in reconciling Darwin's truly scientific observation that natural selection has developed in mankind "… the greater strength of the social or maternal instincts than that of any other instinct or motive" with the pseudo-scientific belief that survival-of-the-fittest thinking is for winners and altruism is for losers is this:

Habituation leads to a decrease in response to a stimulus after repeated exposure, which results in a loss of gratitude for contributions to the greater good. Habituation is associated not only with cravings and aversions to stimuli but also with the need for ever greater stimulus in an entropically uncertain world. Survival-of-the-fittest thinking substitutes habituation for true adaptation to an entropically changing world, while natural selection rewards those who truly adapt to such a world.
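The habituation dynamic described above can be caricatured in a toy model (entirely my own illustration, not drawn from any cited study): repeated exposure to the same stimulus dulls the response, while a stronger, novel stimulus restores sensitivity.

```python
def habituated_responses(stimulus_train, decay=0.7):
    """Toy habituation model: each repeat of the same stimulus
    magnitude scales the response down by `decay`; a larger
    (novel) stimulus resets sensitivity to full strength."""
    responses = []
    sensitivity = 1.0
    last = None
    for s in stimulus_train:
        if last is not None and s > last:
            sensitivity = 1.0  # novelty restores the response
        responses.append(sensitivity * s)
        sensitivity *= decay  # repeated exposure dulls the response
        last = s
    return responses
```

Running the same stimulus three times yields a shrinking response (1.0, 0.7, 0.49 with the default decay), mirroring the escalating-stimulus treadmill described above.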

Hopefully, greater use of information theory will allow society to better reflect gratitude for the numerous individual contributions to the greater good.

See also:


Congratulations on your 5,000th post, ASLR. Your contributions here are of enormous value. Thank you for sharing these thoughts on your philosophy, with which I have a lot of sympathy. I'll watch those videos with interest. This is an important topic, more vital than ever, but one on which humans seem bound to differ.

Quote

For my 5,000th post I feel like briefly elaborating on some of my prior comments about my ideas of how I see our current global socio-economic system possibly adapting to the Anthropocene before the end of this century. I begin by quoting philosopher Peter Singer that evolution has: "… bequeath(ed) humans with a sense of empathy – an ability to treat other people's interests as comparable to one's own. Unfortunately, by default we apply it only to a very narrow circle of friends and family. People outside that circle were treated as subhuman and can be exploited with impunity.

First, picture a monkey. A monkey dressed like a little pirate, if that helps you. We'll call him Slappy.

Imagine you have Slappy as a pet. Imagine a personality for him. Maybe you and he have little pirate monkey adventures and maybe even join up to fight crime. Think how sad you'd be if Slappy died.

Now, imagine you get four more monkeys. We'll call them Tito, Bubbles, Marcel and ShitTosser. Imagine personalities for each of them now. Maybe one is aggressive, one is affectionate, one is quiet, the other just throws shit all the time. But they're all your personal monkey friends.

Now imagine a hundred monkeys.

Not so easy now, is it? So how many monkeys would you have to own before you couldn't remember their names? At what point, in your mind, do your beloved pets become just a faceless sea of monkey? Even though each one is every bit the monkey Slappy was, there's a certain point where you will no longer really care if one of them dies.

So how many monkeys would it take before you stopped caring? That's not a rhetorical question. We actually know the number.

I admit to having read (mostly) only the early part of this thread, but...

Quote

https://en.wikipedia.org/wiki/Humpty_Dumpty

Extract: "... After [Humpty Dumpty's] fall, and subsequent shattering, the inability to put him together again is representative of this principle, as it would be highly unlikely, though not impossible, to return him to his earlier state of lower entropy, as the entropy of an isolated system never decreases."

I grew up with an addendum to Lewis Carroll's poem: "But an English doctor with patience and glue / Put Humpty together as good as new."

As a young child, I wondered what this magical "pa-shun-cen" glue was, maybe something like epoxy.

This child's magical thinking aside, I think society is stuck on magical non-thinking: technology will save us, but we don't even think that far.

Re monkeysphere: I take our trash, etc. to the transfer & recycling station so that there is no 'unknown' person handling my refuse (and machines handle it from there - yeah, I don't care about them so much!). Although this isn't how I operate in all of my life, I do think about apple pickers and construction workers as individuals (as I have had those jobs). Some of this may lead me to be somewhat 'happy to remove invasive species from my acre' [e.g. reclusive] instead of lounging by the community pool or joining a local politically active environmental group.

Quote

First, picture a monkey. A monkey dressed like a little pirate, if that helps you. We'll call him Slappy.

…

So how many monkeys would it take before you stopped caring? That's not a rhetorical question. We actually know the number.

Thanks for the thoughtful input. On a human level, my approach to addressing the issue of caring (or not caring) about "Slappy" the monkey is to live in the moment; so that no matter how many monkeys there are [say 7.3 billion and counting], whichever monkey you are dealing with at any given moment in time, you treat that monkey with love and respect, and not just as another faceless number.

If that is too esoteric for some, then at the minimum, for every commercial interaction there should be a carbon fee with a dividend to society to address the externalized cost of GHG emissions in our commodity driven modern socio-economic system.
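The arithmetic of such a carbon fee & dividend plan is simple enough to sketch (a toy illustration with made-up numbers, not any specific legislative proposal): every tonne of emissions pays the fee, the pot is returned as an equal per-capita dividend, so below-average emitters come out ahead and above-average emitters pay a net cost.

```python
def carbon_fee_and_dividend(emissions_tonnes, fee_per_tonne, population):
    """Sketch of a revenue-neutral carbon fee and dividend.

    emissions_tonnes: dict mapping emitter -> tonnes of CO2 emitted.
    Returns (per-capita dividend, net cost per emitter)."""
    revenue = sum(emissions_tonnes.values()) * fee_per_tonne
    dividend = revenue / population
    # Net cost per emitter: fee paid minus the dividend received.
    net = {who: t * fee_per_tonne - dividend
           for who, t in emissions_tonnes.items()}
    return dividend, net
```

With two households emitting 10 t and 2 t at a (hypothetical) $50/t fee, each receives a $300 dividend; the heavy emitter pays $200 net while the light emitter gains $200, pricing in the externalized GHG cost without shrinking the total pot.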

Very best,
ASLR


Extract: "Despite the need to proactively adapt to global warming, something could be going awry with adaptation projects — planned activities that entail altering infrastructure, institutions, or economic practices to respond to the impacts of climate change. Ford et al. surveyed 1,741 studies of climate change adaptation between 2006 and 2009 and reached a troubling conclusion: instead of helping the most vulnerable economic or social sectors, projects were contributing to the ones that had already received large shares of adaptation funding. Remling and Persson analysed 27 projects supported under the Adaptation Fund of the United Nations Framework Convention on Climate Change (UNFCCC) and found that none of them attempted to address inequality or remediate unequal power structures."


The linked article discusses how robots & computers are becoming "self-aware", and why this trend will continue: such robots & computers "… will build up a repertoire of abilities that will make them become ‘very useful’ to humans." (Note: for discussion of the AI & robotic risks, see the "Anthropocenic Existential Risk" thread.):

Extract: "Great, now we have given robots ‘self awareness’ … But Selmer Bringsjord, who led the study, said it meant robots will build up a repertoire of abilities that will make them become ‘very useful’ to humans. Last year, a super-computer became the first AI to pass the Turing Test, successfully (and worryingly) convincing humans it was a 13-year-old boy."


I have previously mentioned that Charles Darwin emphasized humans' propensity for cooperation and empathy; however, what is new to me in the following reference, Marean (2015), is that Homo sapiens sapiens' conflict with archaic Homo sapiens over resources in South Africa contributed directly to our propensity to cooperate, and that this cooperative nature led directly to the extinction of all archaic human groups and great numbers of megafauna:

In Brief: "Of all the human species that have lived on the earth, only Homo sapiens managed to colonize the entire globe. Scientists have long puzzled over how our species alone managed to disperse so far and wide. A new hypothesis holds that two innovations unique to H. sapiens primed it for world domination: a genetically determined propensity for cooperation with unrelated individuals and advanced projectile weapons."

Extract: "Sometime after 70,000 years ago our species, Homo sapiens, left Africa to begin its inexorable spread across the globe. Other human species had established themselves in Europe and Asia, but only our H. sapiens ancestors ultimately managed to push out into all the major continents and many island chains. Theirs was no ordinary dispersal. Everywhere H. sapiens went, massive ecological changes followed. The archaic humans they encountered went extinct, as did vast numbers of animal species. It was, without a doubt, the most consequential migration event in the history of our planet."


As I stated in my opening post, I believe that the Anthropocene is best characterized by the Information Age, in that Homo sapiens sapiens represents a currently unique repository of evolved intelligence that has dominated various Earth Systems (thereby justifying the designation of an Anthropocene Era) by a combination of cooperation and aggression. Furthermore, in this and the following post(s), I try to re-focus on:

(a) First, in this post, on AI and cyborg technology as an example of an Information Age trend showing where we are going.

(b) In following posts, on how evolution has shaped the human mind-body relationship as a powerful tool for dealing with complex real-world problems (and on how this can serve as a model for accelerating AI development).

(c) And in following posts, on how "mindfulness" techniques (focused on the mind-body relationship) are already improving the effectiveness of numerous human fields, from business practices to psychology to the social sciences (with the understanding that the social sciences primarily focus on modeling human power structures), and may improve the adaptation of ourselves, our socio-economic systems and coming AI systems to more effectively deal with the various accelerating challenges of the Anthropocene.

In my immediate prior post (Reply #36), I cited Marean (2015) to illustrate that human evolution was heavily impacted by forcing from (i.e., competition with) archaic hominins; which theoretically should result in decision makers placing an increased premium on cooperation, and also on learning to better manage our evolved tendency towards territorial/tribal aggression for securing key resources.

I recognize that such a thread both raises, and highlights, the risk of anthropogenic bias in the frequently inductive modeling of complex real-world problems (induction is frequently required for such modeling; e.g., "Induction is the glory of science and the scandal of philosophy"). For example:

(a) Climate change scientists frequently show human bias when generating climate change models, and associated integrated assessment models, by focusing on incomplete deductive models that typically ignore fat-tailed risk. Hopefully, policy makers will demand that AR6 include clearer guidance on fat-tailed risks; if not, perhaps they would be willing to pay for anthropological studies of the decision making of both climate change scientists and themselves.

(b) "Intelligent Design" nuts frequently call up the input of an anthropogenically modeled God to explain evolutionary results that they find non-intuitive (i.e., spooky); when in reality such nuts cannot see beyond their own ignorance, while others (such as Einstein or Darwin) can often progressively/incrementally see beyond such transient limitations. Hopefully, AI can avoid such magical thinking.

(c) Crony capitalists (including some in the fossil fuel industry) could say that, due to the "free rider" problem, enacting an effective "Carbon Fee & Dividend plan & associated Tariff program" is not politically feasible because there is no effective way to manage such a plan in the real world. Hopefully, even before advanced AI systems are developed, programmers will be able to write pattern recognition algorithms (along the lines of financial derivative algorithms) that could both identify and model "free rider" behavior; which would help policy makers make more confident decisions for implementing effective Carbon Fee & Dividend plans & associated Tariff programs.

I realize that earlier in this thread I provided wiki-links on "Artificial Intelligence" and on two key associated concepts, "Computational Intelligence" and "Evolutionary Computation", which I now provide again below:

https://en.wikipedia.org/wiki/Artificial_intelligence

Extract: "Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field of study which studies how to create computers and computer software that are capable of intelligent behavior. Major AI researchers and textbooks define this field as "the study and design of intelligent agents", in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines". AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications. The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field's long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are a large number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others. The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology. The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—"can be so precisely described that a machine can be made to simulate it." This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science."

https://en.wikipedia.org/wiki/Computational_intelligence

Extract: "Computational intelligence (CI) is a set of nature-inspired computational methodologies and approaches to address complex real-world problems to which traditional approaches, i.e., first principles modeling or explicit statistical modeling, are ineffective or infeasible. Many such real-life problems are not considered to be well-posed problems mathematically, but nature provides many counterexamples of biological systems exhibiting the required function, practically. For instance, the human body has about 200 joints (degrees of freedom), but humans have little problem in executing a target movement of the hand, specified in just three Cartesian dimensions. Even if the torso were mechanically fixed, there is an excess of 7:3 parameters to be controlled for natural arm movement. Traditional models also often fail to handle uncertainty, noise and the presence of an ever-changing context. Computational Intelligence provides solutions for such and other complicated problems and inverse problems. It primarily includes artificial neural networks, evolutionary computation and fuzzy logic. In addition, CI also embraces biologically inspired algorithms such as swarm intelligence and artificial immune systems, which can be seen as a part of evolutionary computation, and includes broader fields such as image processing, data mining, and natural language processing. Furthermore, other formalisms: Dempster–Shafer theory, chaos theory and many-valued logic are used in the construction of computational models. The characteristic of "intelligence" is usually attributed to humans. More recently, many products and items also claim to be "intelligent". Intelligence is directly linked to reasoning and decision making. Fuzzy logic was introduced in 1965 as a tool to formalise and represent the reasoning process, and fuzzy logic systems which are based on fuzzy logic possess many characteristics attributed to intelligence. Fuzzy logic deals effectively with uncertainty that is common for human reasoning, perception and inference and, contrary to some misconceptions, has a very formal and strict mathematical backbone ('is quite deterministic in itself yet allowing uncertainties to be effectively represented and manipulated by it', so to speak). Neural networks, introduced in the 1940s (further developed in the 1980s), mimic the human brain and represent a computational mechanism based on a simplified mathematical model of the perceptrons (neurons) and signals that they process. Evolutionary computation, introduced in the 1970s and more popular since the 1990s, mimics the population-based sexual evolution through reproduction of generations. It also mimics genetics in so-called genetic algorithms."
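For instance, the triangular membership functions commonly used in fuzzy logic systems can be written in a few lines (my own minimal sketch, not from the extract):

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: the degree in [0, 1] to which x
    belongs to a fuzzy set peaking at b with support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge
```

A (hypothetical) fuzzy set "comfortable temperature" peaking at 21°C with support 15–27°C would assign 18°C a membership of 0.5 rather than forcing a crisp in/out decision, which is how fuzzy logic represents graded, uncertain categories.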

https://en.wikipedia.org/wiki/Evolutionary_computation

Extract: "In computer science, evolutionary computation (a.k.a. evolutionary computing) is a subfield of artificial intelligence (more particularly computational intelligence) that can be defined by the type of algorithms it is concerned with. These algorithms, called evolutionary algorithms, are based on adopting Darwinian principles, hence the name. Technically they belong to the family of trial and error problem solvers and can be considered global optimization methods with a metaheuristic or stochastic optimization character, distinguished by the use of a population of candidate solutions (rather than just iterating over one point in the search space). They are mostly applied for black box problems (no derivatives known), often in the context of expensive optimization. Evolutionary computation uses iterative progress, such as growth or development in a population. This population is then selected in a guided random search using parallel processing to achieve the desired end. Such processes are often inspired by biological mechanisms of evolution. As evolution can produce highly optimised processes and networks, it has many applications in computer science."
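A minimal evolutionary algorithm in the sense of the extract — a population of candidate solutions, selection, and mutation — can be sketched as a toy maximizer of the number of 1-bits in a list (my own illustration; all parameter values are arbitrary):

```python
import random

def evolve_onemax(bits=20, pop_size=30, generations=100, seed=1):
    """Toy evolutionary computation: maximize the count of 1s in a
    bit list via tournament selection and bit-flip mutation."""
    rng = random.Random(seed)
    fitness = sum  # fitness of an individual = number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection: keep the fitter of two random parents.
            a, b = rng.sample(pop, 2)
            parent = a if fitness(a) >= fitness(b) else b
            # Bit-flip mutation with probability 1/bits per position.
            child = [bit ^ (rng.random() < 1.0 / bits) for bit in parent]
            new_pop.append(child)
        pop = new_pop
    return max(fitness(ind) for ind in pop)
```

Even this crude sketch reliably drives the population toward the all-ones optimum, which is the "guided random search" the extract describes.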

Finally, I conclude this post with a link (and extracts) to a Scientific American article presenting a thought experiment about the nature of information on Earth, and about how human society (hopefully in the near future) could change/adapt in order to improve the use of information for the common good:

Cesar A. Hidalgo (2015), "Planet Hard Drive", Sci. Am., Vol. 313, Issue 2.

In Brief: "If we define information as order and then calculate the amount of information that our planet can hold, we find that Earth's hard drive is largely empty – despite billions of years of life and thousands of years of human cultural activity. This thought experiment tells us interesting things about the emergence of order in the universe. Although the universe is hostile toward order – overall, entropy always tends to increase – information grows over time. Humans are partly responsible for the growth of information on Earth, but we remain severely limited in our capacity to create order."

Extract: "The final thing this thought experiment tells us is that the ability of human networks to create information is severely constrained. Forget all the talk about big data: from a cosmic perspective, we are creating a surprisingly small amount of information (even though we burn enough energy in the process to have liberated the carbon that is warming up our planet). Our information-creation capacity is limited in part because our ability to form networks of people is constrained by historical, institutional and technological factors. Language barriers, for instance, fracture our global society and limit our ability to connect humans born in distant parts of the globe. Technological forces can help reduce these barriers. The rise of air travel and long-distance communication has reduced the cost of distant interactions, allowing us to weave networks that are highly global and that increase our capacity to process information. Still, these technologies are no panacea, and our ability to process information collectively, while larger than in previous decades, remains small. So how will the growth of information on Earth evolve in the coming centuries? An optimistic view is that the globalizing forces of technology and the fall of parochial institutions, such as patriotism and religion, will help erode the historical differences that continue to inspire hate among people from different linguistic, ethnic, religious and national backgrounds. Meanwhile technological changes will deliver an age of hyperconnectivity. Electronics will evolve from portable to wearable to implantable, delivering new forms of mediated social interactions. For millennia, our species' ability to create information has benefited from our ability to deposit information in our environment, whether by building stone axes or by writing epic poems. This talent has provided us with the augmentation and coordination needed for our computational capacity to increase. We are in the midst of a new revolution that has the potential to transform this dynamic and make it even more powerful. In this millennium, human and machine will merge through devices that will combine the biological computers housed between our ears and the digital machines that have emerged from our curious minds. The resulting hyperconnected society will present our species with some of the most challenging ethical problems in human history. We could lose aspects of our humanity that some of us consider essential: for example, we might cheat death. But this merger between our bodies and the information-processing machines our brains imagined might be the only way to push the growth of information forward. We were born from information, and now, increasingly, information is being born from us."
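One standard way to quantify the order Hidalgo is talking about is Shannon entropy (my own illustration, not from the article): a perfectly repetitive message has zero entropy per symbol, while a maximally mixed one has the most.

```python
import math
from collections import Counter

def shannon_entropy_bits(message):
    """Shannon entropy in bits per symbol: a rough measure of how
    mixed a message is (low entropy = high order/redundancy)."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

For example, "aaaa" has entropy 0 bits per symbol (pure order), while "abab" has 1 bit per symbol; such measures underpin estimates of how much information a physical medium can store.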

Extract: "Although most would agree that the average person is smarter than the average cat, comparing humans and machines is not as straightforward. A computer may not excel at abstract reasoning, but it can process vast amounts of data in the blink of an eye. In recent years, researchers in artificial intelligence (AI) have used this computational firepower on the scads of data accumulating online, in academic research, in financial records, and in virtually all walks of life. The algorithms they develop help machines learn from data and apply that knowledge in new situations, much like humans do. The ability of computers to extract personal information from seemingly innocuous data raises privacy concerns. Yet many AI systems indisputably improve our lives; for example, by making communication easier through machine translation, by helping diagnose illness, and by providing modern comforts, such as your smartphone acting as your personal assistant. This special issue presents a survey of the remarkable progress made in AI and outlines challenges lying ahead. Many AI systems are designed for narrow applications, such as playing chess, flying a jet, or trading stocks. AI researchers also have a grander aspiration: to create a well-rounded and thus more humanlike intelligent agent. Scaling that research peak is daunting. But triumphs in the field of AI are bringing to the fore questions that, until recently, seemed better left to science fiction than to science: How will we ensure that the rise of the machines is entirely under human control? And what will the world be like if truly intelligent computers come to coexist with humankind?"

Abstract: "Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing."

Abstract: "Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today’s researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area."

Abstract: "The field of artificial intelligence (AI) strives to build rational agents capable of perceiving the world around them and taking actions to advance specified goals. Put another way, AI researchers aim to construct a synthetic homo economicus, the mythical perfectly rational agent of neoclassical economics. We review progress toward creating this new species of machine, machina economicus, and discuss some challenges in designing AIs that can reason effectively in economic contexts. Supposing that AI succeeds in this quest, or at least comes close enough that it is useful to think about AIs in rationalistic terms, we ask how to design the rules of interaction in multi-agent systems that come to represent an economy of AIs. Theories of normative design from economics may prove more relevant for artificial agents than human agents, with AIs that better respect idealized assumptions of rationality than people, interacting through novel rules and incentive systems quite distinct from those tailored for people."

Abstract: "After growing up together, and mostly growing apart in the second half of the 20th century, the fields of artificial intelligence (AI), cognitive science, and neuroscience are reconverging on a shared view of the computational foundations of intelligence that promotes valuable cross-disciplinary exchanges on questions, methods, and results. We chart advances over the past several decades that address challenges of perception and action under uncertainty through the lens of computation. Advances include the development of representations and inferential procedures for large-scale probabilistic inference and machinery for enabling reflection and decisions about tradeoffs in effort, precision, and timeliness of computations. These tools are deployed toward the goal of computational rationality: identifying decisions with highest expected utility, while taking into consideration the costs of computation in complex real-world problems in which most relevant calculations can only be approximated. We highlight key concepts with examples that show the potential for interchange between computer science, cognitive science, and neuroscience."
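The "computational rationality" idea above — identify the decision with highest expected utility while charging for the cost of computation — can be caricatured in a few lines. The effort levels, accuracy figures, and costs below are my own invented numbers, not from the paper:

```python
# Toy computational-rationality sketch: choose how much to compute by
# maximizing expected utility net of computation cost.
# Probabilities and costs are invented for illustration.

options = [
    # (effort label, probability of a correct decision, cost of that computation)
    ("quick guess",  0.60, 0.00),
    ("rough search", 0.85, 0.10),
    ("deep search",  0.97, 0.15),
    ("exhaustive",   0.99, 0.50),
]

REWARD = 1.0  # utility of deciding correctly

def net_utility(p_correct, cost):
    # expected utility of the decision minus the cost of computing it
    return REWARD * p_correct - cost

best = max(options, key=lambda o: net_utility(o[1], o[2]))
print(best[0])  # "deep search": past this point, extra computation costs more than it gains
```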

Logged

“It is not the strongest or the most intelligent who will survive but those who can best manage change.” ― Leon C. Megginson

Before I post more about human adaptation and "mindfulness" for expanding individual horizons, I provide the following extracts about Paul Mason's vision for a utopian postcapitalistic global socio-economic system. In the following linked article Mason makes it clear that Marx has been misunderstood for over a century (see the last link to an open-access copy of "The Fragment on Machines" by Karl Marx), in much the same way that Charles Darwin has frequently been misinterpreted:

Extract: "Capitalism, it turns out, will not be abolished by forced-march techniques. It will be abolished by creating something more dynamic that exists, at first, almost unseen within the old system, but which will break through, reshaping the economy around new values and behaviours. I call this postcapitalism. As with the end of feudalism 500 years ago, capitalism’s replacement by postcapitalism will be accelerated by external shocks and shaped by the emergence of a new kind of human being. And it has started.…New forms of ownership, new forms of lending, new legal contracts: a whole business subculture has emerged over the past 10 years, which the media has dubbed the “sharing economy”. Buzzwords such as the “commons” and “peer-production” are thrown around, but few have bothered to ask what this development means for capitalism itself. I believe it offers an escape route – but only if these micro-level projects are nurtured, promoted and protected by a fundamental change in what governments do. And this must be driven by a change in our thinking – about technology, ownership and work. So that, when we create the elements of the new system, we can say to ourselves, and to others: “This is no longer simply my survival mechanism, my bolt hole from the neoliberal world; this is a new way of living in the process of formation.”…Even now many people fail to grasp the true meaning of the word “austerity”. Austerity is not eight years of spending cuts, as in the UK, or even the social catastrophe inflicted on Greece. It means driving the wages, social wages and living standards in the west down for decades until they meet those of the middle class in China and India on the way up.…Innovation is happening but it has not, so far, triggered the fifth long upswing for capitalism that long-cycle theory would expect.
The reasons lie in the specific nature of information technology....We’re surrounded not just by intelligent machines but by a new layer of reality centred on information.…There is, alongside the world of monopolised information and surveillance created by corporations and governments, a different dynamic growing up around information: information as a social good, free at the point of use, incapable of being owned or exploited or priced. I’ve surveyed the attempts by economists and business gurus to build a framework to understand the dynamics of an economy based on abundant, socially-held information. But it was actually imagined by one 19th-century economist in the era of the telegraph and the steam engine. His name? Karl Marx.…In the “Fragment” Marx imagines an economy in which the main role of machines is to produce, and the main role of people is to supervise them. He was clear that, in such an economy, the main productive force would be information. The productive power of such machines as the automated cotton-spinning machine, the telegraph and the steam locomotive did not depend on the amount of labour it took to produce them but on the state of social knowledge. Organisation and knowledge, in other words, made a bigger contribution to productive power than the work of making and running the machines. Given what Marxism was to become – a theory of exploitation based on the theft of labour time – this is a revolutionary statement. It suggests that, once knowledge becomes a productive force in its own right, outweighing the actual labour spent creating a machine, the big question becomes not one of “wages versus profits” but who controls what Marx called the “power of knowledge”.…In these musings, not published until the mid-20th century, Marx imagined information coming to be stored and shared in something called a “general intellect” – which was the mind of everybody on Earth connected by social knowledge, in which every upgrade benefits everybody.
In short, he had imagined something close to the information economy in which we live. And, he wrote, its existence would “blow capitalism sky high”....With the terrain changed, the old path beyond capitalism imagined by the left of the 20th century is lost. But a different path has opened up. Collaborative production, using network technology to produce goods and services that only work when they are free, or shared, defines the route beyond the market system. It will need the state to create the framework – just as it created the framework for factory labour, sound currencies and free trade in the early 19th century. The postcapitalist sector is likely to coexist with the market sector for decades, but major change is happening.…By creating millions of networked people, financially exploited but with the whole of human intelligence one thumb-swipe away, info-capitalism has created a new agent of change in history: the educated and connected human being. This will be more than just an economic transition. There are, of course, the parallel and urgent tasks of decarbonising the world and dealing with demographic and fiscal timebombs. But I’m concentrating on the economic transition triggered by information because, up to now, it has been sidelined. Peer-to-peer has become pigeonholed as a niche obsession for visionaries, while the “big boys” of leftwing economics get on with critiquing austerity.…So how do we visualise the transition ahead? The only coherent parallel we have is the replacement of feudalism by capitalism – and thanks to the work of epidemiologists, geneticists and data analysts, we know a lot more about that transition than we did 50 years ago when it was “owned” by social science. The first thing we have to recognise is: different modes of production are structured around different things. Feudalism was an economic system structured by customs and laws about “obligation”. Capitalism was structured by something purely economic: the market.
We can predict, from this, that postcapitalism – whose precondition is abundance – will not simply be a modified form of a complex market society. But we can only begin to grasp at a positive vision of what it will be like. I don’t mean this as a way to avoid the question: the general economic parameters of a postcapitalist society by, for example, the year 2075, can be outlined. But if such a society is structured around human liberation, not economics, unpredictable things will begin to shape it.…The feudal model of agriculture collided, first, with environmental limits and then with a massive external shock – the Black Death. After that, there was a demographic shock: too few workers for the land, which raised their wages and made the old feudal obligation system impossible to enforce. The labour shortage also forced technological innovation. The new technologies that underpinned the rise of merchant capitalism were the ones that stimulated commerce (printing and accountancy), the creation of tradeable wealth (mining, the compass and fast ships) and productivity (mathematics and the scientific method). Present throughout the whole process was something that looks incidental to the old system – money and credit – but which was actually destined to become the basis of the new system. In feudalism, many laws and customs were actually shaped around ignoring money; credit was, in high feudalism, seen as sinful. So when money and credit burst through the boundaries to create a market system, it felt like a revolution. Then, what gave the new system its energy was the discovery of a virtually unlimited source of free wealth in the Americas. A combination of all these factors took a set of people who had been marginalised under feudalism – humanists, scientists, craftsmen, lawyers, radical preachers and bohemian playwrights such as Shakespeare – and put them at the head of a social transformation.
At key moments, though tentatively at first, the state switched from hindering the change to promoting it. Today, the thing that is corroding capitalism, barely rationalised by mainstream economics, is information. Most laws concerning information define the right of corporations to hoard it and the right of states to access it, irrespective of the human rights of citizens. The equivalent of the printing press and the scientific method is information technology and its spillover into all other technologies, from genetics to healthcare to agriculture to the movies, where it is quickly reducing costs. The modern equivalent of the long stagnation of late feudalism is the stalled take-off of the third industrial revolution, where instead of rapidly automating work out of existence, we are reduced to creating what David Graeber calls “bullshit jobs” on low pay. And many economies are stagnating. The equivalent of the new source of free wealth? It’s not exactly wealth: it’s the “externalities” – the free stuff and wellbeing generated by networked interaction. It is the rise of non-market production, of unownable information, of peer networks and unmanaged enterprises. The internet, French economist Yann Moulier-Boutang says, is “both the ship and the ocean” when it comes to the modern equivalent of the discovery of the new world. In fact, it is the ship, the compass, the ocean and the gold. The modern day external shocks are clear: energy depletion, climate change, ageing populations and migration. They are altering the dynamics of capitalism and making it unworkable in the long term.
They have not yet had the same impact as the Black Death – but as we saw in New Orleans in 2005, it does not take the bubonic plague to destroy social order and functional infrastructure in a financially complex and impoverished society. Once you understand the transition in this way, the need is not for a supercomputed Five Year Plan – but a project, the aim of which should be to expand those technologies, business models and behaviours that dissolve market forces, socialise knowledge, eradicate the need for work and push the economy towards abundance. I call it Project Zero – because its aims are a zero-carbon-energy system; the production of machines, products and services with zero marginal costs; and the reduction of necessary work time as close as possible to zero.…The main contradiction today is between the possibility of free, abundant goods and information; and a system of monopolies, banks and governments trying to keep things private, scarce and commercial. Everything comes down to the struggle between the network and the hierarchy: between old forms of society moulded around capitalism and new forms of society that prefigure what comes next.…We need more than just a bunch of utopian dreams and small-scale horizontal projects. We need a project based on reason, evidence and testable designs, that cuts with the grain of history and is sustainable by the planet. And we need to get on with it.

• Postcapitalism is published by Allen Lane on 30 July."

For an open access copy of: "The Fragment on Machines" by Karl Marx see:

Before providing more posts about the human side of adapting to the Anthropocene, I provide the two following linked articles that elaborate on how quickly the machine/AI/information theory side of this issue is advancing. The first link talks about advances in the use of light for both quantum computing and communication; and the second link cites how AI systems are already better than humans at identifying sketches:

Extract: "Two limits on current computers are energy and communication. The power consumption issue comes from the fact that the amount of energy used by existing circuit technology doesn't shrink in the same proportion as shrinking physical dimensions. In addition, the speed at which computers can send information is also constrained by the limits of physics. Communication and sending information is crucial. Compute nodes have to be able to talk to each other, and supercomputers simply can't be scaled up indefinitely without taking communication into consideration. New algorithms can partly relieve the issue but eventually, you're going to hit a barrier. And that's exactly where optics come in. Optical circuits, in addition to quantum computing, could speed up communication; conventionally, photons are used only to deliver information, racing along fiber-optic cables. Increasing these light-based components could, in theory, help speed up communications."

Extract: "Net, the program is capable of correctly identifying the subject of sketches 74.9 per cent of the time compared to humans that only managed a success rate of 73.1 per cent. As sketching becomes more relevant with the increase in the use of touchscreens, the development could provide a foundation for new ways to interact with computers."

In the first linked reference (& linked YouTube video) the social evolutionist Nicholas Humphrey proposes that human consciousness be considered an illusion created by the brain, which he chooses to call brain art. Furthermore, he proposes that "… the evolutionary function of brain art is nothing less than to induce you to fall in love with yourself …", which he asserts is indicated by Charles Darwin's theory of natural selection. As I believe that human consciousness has many different forms, I believe that Humphrey's proposal best matches the consciousness associated with one's ego, where one spends a lifetime painting a sensory (feelings, emotions, etc., in a mind-body context) picture of oneself in one's mind and in the minds of those around the person. In this sense, the ego is the part of oneself that takes the credit for all the hard work that one does in living a life.

Extract: "1. Debate rages as to whether the qualities of conscious experience can be explained simply as the workings of the physical brain or whether there must be an additional ingredient of a nonphysical kind. 2. Considering consciousness to be an illusion—a mere trick of the physical brain—offends people, so perhaps we should think of consciousness as art instead. Whereas people resent being duped by illusions, they are proud to be art lovers. 3. If our brains have evolved to create masterpieces of consciousness for our private enjoyment, scientists will want to know what the evolutionary payoff is. Perhaps the purpose of this brain art is to make people fall in love with themselves—and other people, too…Darwin suggested that one of the chief functions of human art, too, is to induce the onlooker to fall in love with the artist. Thus, an extraordinary possibility suggests itself: the evolutionary function of brain art is nothing less than to induce you to fall in love with yourself.…French philosopher Rene Descartes famously intoned: "I think, therefore, I am." But the self that evolves around sensory consciousness is deeper and more generous: I feel, therefore I am."

Humphrey further suggests that sensory consciousness develops by placing one's ego at the center of a brilliant and perplexing work of brain art, which encourages one to think of all humans as equally touched by such magic of consciousness [noting that Humphrey is pandering (e.g., to those who talk about "intelligent design") by using the term magic]. The Buddha would concur that there is no magic in human sensory consciousness, but would add that by living entirely in the moment (see my Reply #33) sensory consciousness can be transformed into awe-inspiring, inexplicably beautiful reality/nirvana. However, there are many paradoxes in Buddhist thought, such as that illustrated by Beisser (1970), where he describes the paradoxical theory of change as the change that "… occurs when one becomes what he is, not when he tries to become what he is not."

Extract: “I will call it the paradoxical theory of change, for reasons that shall become obvious. Briefly stated, it is this: that change occurs when one becomes what he is, not when he tries to become what he is not. Change does not take place through a coercive attempt by the individual or by another person to change him, but it does take place if one takes the time and effort to be what he is — to be fully invested in his current positions. By rejecting the role of change agent, we make meaningful and orderly change possible.”

Bearing in mind both Humphrey's and Beisser's insights: when one (or a society) needs to adapt to the stresses continuously coming in the Anthropocene, one needs to stop painting egotistical sensory brain art of what changes one can enact; when one instead learns to live in the moment with oneself and those around oneself, gratitude and appreciation will automatically result in changes leading to the common good. While this may sound like philosophical pablum, when considering programming for future AI, such insights could allow AI to benefit from human mind-body sensory guidelines, for example by using the Facial Action Coding System (FACS) [see Keltner (2009) below] to scan images of humans in order to use human sensations (emotions, feelings, etc.) as a guarantor of morality in evaluating what is the common good. In this sense not only humans, but also AI, must positively reinforce one's activities that promote the common good and resist one's activities that degrade the common good, as evaluated by moment-to-moment mind-body sensations (this will likely be facilitated in the future when both humans and AI are connected directly to each other via the Internet, forming new sensory "brainlets").
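To make the FACS idea above slightly more concrete, here is a purely speculative sketch. The action-unit (AU) numbers follow Ekman & Friesen's coding (e.g., AU12 is the lip-corner puller seen in smiling), but the weights, the valence formula, and the mapping to reinforce/resist feedback are entirely my own invention, not an established method:

```python
# Speculative sketch: an AI receives FACS action-unit (AU) intensity scores
# (0-5) estimated from face images and derives a crude valence estimate to
# use as a feedback signal. The weights below are invented for illustration.

AU_WEIGHTS = {12: +1.0,   # lip corner puller (smiling) -> positive valence
              6:  +0.5,   # cheek raiser (Duchenne smile marker)
              4:  -1.0,   # brow lowerer (frowning) -> negative valence
              15: -0.5}   # lip corner depressor

def valence(au_intensities):
    """Crude valence score from a dict of {AU number: intensity 0-5}."""
    return sum(AU_WEIGHTS.get(au, 0.0) * i for au, i in au_intensities.items())

def feedback(au_intensities, threshold=1.0):
    """Reinforce activities that observers greet positively; resist the reverse."""
    v = valence(au_intensities)
    if v > threshold:
        return "reinforce"
    if v < -threshold:
        return "resist"
    return "neutral"

print(feedback({12: 3, 6: 2}))   # strongly smiling observers -> reinforce
print(feedback({4: 3, 15: 2}))   # frowning observers -> resist
```

Obviously a deployed system would need far more than four action units and a learned, context-sensitive mapping; the sketch only shows the feedback-loop shape of the idea.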

Dacher Keltner (2009), "Born to be Good", W.W. Norton & Company
Extracts: "I have been led to the idea that emotion is the source of the meaningful life...This idea proved to have the deepest scientific promise in the hands of Charles Darwin, who believed that brief emotional expressions offer clues to the deep origins of our design, and Paul Ekman, who figured out how to bring quantifiable order to the thousands of movements of the face.…The reader may be surprised to learn that:
- We are a caretaking species. The profound vulnerability of our offspring rearranged our social organization as well as our nervous system.
- We are a face-to-face species. We are remarkable in our capacity to empathize, to mimic, to mirror.
- Our power hierarchies differ from those of other species; power goes to the most emotionally intelligent.
- We reconcile our conflicts rather than fleeing or killing; we have evolved powerful capacities to forgive.
- We live in complex patterns of fragile monogamy, preferring monogamy but often showing patterns of serial monogamy.
…Darwin's "Expression of the Emotions in Man and Animals" sold 9,000 copies in its first printing, becoming a best seller in its day.…The facial expressions we observe today are a rich shorthand for communicating the possibility of more full-bodied actions – attack, flight, embrace.…Paul Ekman put Darwin's universality thesis to a simple empirical test.…To capture the objective subjective, Ekman and Wallace Friesen devoted seven years, without funding or promise of publication, to developing the Facial Action Coding System (FACS), an anatomically based method for identifying every visible facial muscle movement in the frame-by-frame analysis of facial expression as it occurs in the seamless flow of social interaction."

I close this post by pointing out that the current excessive dependence on feeding one's ego (sensory consciousness) via social media (selfies, blogging, Facebooking, etc.) and other technological connections (including news media) can lead to increasing shallowness and banality, unless one is actively engaged in the moment in discourse with others. This point is illustrated by the following extract from a July 21, 2015 interview with President Obama, where he encourages citizens to engage actively in discussing policy issues in order to avoid the "Balkanization" of the media and of our policy makers. This would accelerate our adaptation to the Anthropocene by speeding the take-up of a postcapitalistic global system suitable for the Information Age that we live in.

Extract: "“The one thing I know as I enter my last year as president is the country is full of good and decent people and there is a sense of common purpose at the neighborhood level and at the school and in the workplace,” Mr. Obama told host Jon Stewart. “And that dissipates the further up it goes because of the money and all the filters and all the polarizing that takes place in how politics are shaped,” he continued. Part of this polarization, Obama said, is due to the changing nature of the media. “I think [the media] gets distracted by shiny objects and doesn’t always focus on the big, tough choices and decisions that have to be made. And part of that is just the changing nature of technology,” he said. The president admitted that the White House was “way too slow in trying to redesign and reengineer” the structure of its Press Office to adapt to online and social media, and lamented the “Balkanization” of the media in recent years. “You’ve got folks who are constantly looking for facts that reinforce their existing point of view as opposed to having a common conversation,” Obama said. “I think one of the things that we have to think about, not just the president but all of us, is how do we join together in a common conversation about something other than the Super Bowl.” The only way everyday citizens can prevent this polarization, the president continued, is by getting involved and contacting local representatives. He went on to “guarantee” that “if people feel strongly about making sure Iran doesn’t get a nuclear weapon without us going to war and that is expressed to Congress, then people will believe in that.”"


I sometimes wonder whether there is really a dividing line between science and philosophy and if so, where it is.

Greatly appreciate these thoughtful posts, ASLR. If this forum had a Like button, I expect you'd have a much better sense of how well received your posts are.

Anne,

You are very kind, but with something like twenty views per post, I have some idea how well received my posts are; nevertheless, I am posting in the Science folder because I prefer to minimize pontification on great philosophical matters and instead to focus on the use of inductive logic to estimate where society may be going. Meanwhile, to paraphrase the philosopher C. D. Broad again: "Induction is the glory of science and the scandal of philosophy."

To me science bounds uncertainty in a manner that can be tested to demonstrate an actual difference in one's life, while philosophy does not adequately constrain uncertainties to the point where one knows whether or not one is headed down the right path. In this sense, science creates models that are wrong but useful, while philosophy creates models that contain some element of magical thinking, which may or may not be helpful.

In this sense, IPCC climate scientists may be engaging in magical thinking by not adequately conveying to policy makers the true extent of uncertainty (particularly upper-bound uncertainty) in their models, thus making their projections less scientific and more akin to wishful/magical thinking. Similarly, modern captains of industry have created rosy social images of their prowess in creating jobs and wealth; but it is wishful thinking to believe that they can continue with BAU behavior, monopolizing information in a crony-capitalistic fashion in order to maintain their power/control, when the coming challenges of the Anthropocene will require society to break up information monopolies in order to achieve a more sustainable & equitable socio-economic system, more in keeping with the Information Age that we have begun.

Again, what really matters is that the models (scientific, social, mental, etc.) can be grounded in the truth of an ever changing reality and not in some philosophical mental construct.

Very best,
ASLR


To me science bounds uncertainty in a manner that can be tested to demonstrate an actual difference in one's life, while philosophy does not adequately constrain uncertainties to the point where one knows whether or not one is headed down the right path. In this sense, science creates models that are wrong but useful, while philosophy creates models that contain some element of magical thinking, which may or may not be helpful.


That is not the philosophy that I studied, which was empirical.

If so, then maybe science and philosophy are converging, but only when both are grounded (verifiably) in the truth of any given moment.


Science magazine has another special issue (like the recent one focused on AI & information theory, see Reply #38), this time on paleo issues that are relevant to human evolution, the implications of the Sixth Extinction event that we are in, and in general a revolution in the "softer" sciences that are shining a new light on what it will take to adapt to the Anthropocene (see both the overview in the first link and a key article focused on human evolution in the second link):

Abstract: "New breakthroughs in ancient DNA are causing a revolution in the study of human evolution. By sequencing ancient DNA from the fossils of human ancestors, researchers have recently discovered new types of ancient humans and revealed interbreeding between our ancestors and our archaic cousins, including Neandertals. They are exploring how that genetic legacy is shaping our health and appearance today. And now that investigators can sequence entire ancient populations, ancient DNA is revealing that humans on every continent are a complex mix of archaic and modern DNA. Ancient DNA is enabling researchers to answer questions they could not previously address. As a result, archaeologists, anthropologists, and population geneticists are now seeking collaborations with ancient DNA researchers."

« Last Edit: July 24, 2015, 05:01:07 PM by AbruptSLR »


To me science bounds uncertainty in a manner that can be tested to demonstrate an actual difference in one's life, while philosophy does not adequately constrain uncertainties to the point where one knows whether or not one is headed down the right path. In this sense, science creates models that are wrong but useful, while philosophy creates models that contain some element of magical thinking, which may or may not be helpful.

That is not the philosophy that I studied, which was empirical.

If so, then maybe science and philosophy are converging, but only when both are grounded (verifiably) in the truth of any given moment.

In Reply #43 I misquoted C. D. Broad when I wrote: "induction is the glory of science and the scandal of philosophy"; as the actual quote was: "May we venture to hope that when Bacon's next centenary is celebrated the great work which he set going will be completed; and that Inductive Reasoning, which has long been the glory of Science, will have ceased to be the scandal of Philosophy? " Broad, C.D. (1926), "The philosophy of Francis Bacon: An address delivered at Cambridge on the occasion of the Bacon tercentenary, 5 October, 1926", Cambridge: University Press, p. 67.

This makes it clear that the philosopher C.D. Broad was hoping that by 2026 human effort (on empiricism and the scientific method) would bring science and philosophy sufficiently close together that philosophy would be able to employ inductive reasoning to provide useful guidance regarding our currently unbalanced socio-economic systems (see the two following links to Wikipedia articles about Francis Bacon and the scientific method).

Extract: "Francis Bacon, Viscount St. Alban, Kt PC QC (/ˈbeɪkən/; 22 January 1561 – 9 April 1626), was an English philosopher, statesman, scientist, jurist, orator, essayist and author. He served both as Attorney General and Lord Chancellor of England. After his death, he remained extremely influential through his works, especially as philosophical advocate and practitioner of the scientific method during the scientific revolution. Bacon has been called the father of empiricism. His works established and popularised inductive methodologies for scientific inquiry, often called the Baconian method, or simply the scientific method. His demand for a planned procedure of investigating all things natural marked a new turn in the rhetorical and theoretical framework for science, much of which still surrounds conceptions of proper methodology today.…In 1733 Voltaire "introduced him as the "father" of the scientific method" to a French audience, an understanding which had become widespread by 1750. In the 19th century his emphasis on induction was revived and developed by William Whewell, among others. He has been reputed as the "Father of Experimental Science"."

https://en.wikipedia.org/wiki/Scientific_method
Extract: "The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry is commonly based on empirical or measurable evidence subject to specific principles of reasoning. The Oxford English Dictionary defines the scientific method as "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses." The scientific method is an ongoing process, which usually begins with observations about the natural world. Human beings are naturally inquisitive, so they often come up with questions about things they see or hear and often develop ideas (hypotheses) about why things are the way they are. The best hypotheses lead to predictions that can be tested in various ways, including making further observations about nature. In general, the strongest tests of hypotheses come from carefully controlled and replicated experiments that gather empirical data. Depending on how well the tests match the predictions, the original hypothesis may require refinement, alteration, expansion or even rejection. If a particular hypothesis becomes very well supported a general theory may be developed."
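The observe–predict–test–refine loop described in the extract can be caricatured in a few lines of code; here the "hypothesis" is just a guessed rate parameter, and the observations are invented for illustration:

```python
# Toy observe-predict-test-refine loop: a "hypothesis" (a guessed rate
# parameter for y = rate * x) is refined whenever its predictions disagree
# with a new observation by more than a tolerance. Data are invented.

observations = [2.0, 4.1, 5.9, 8.2, 10.0]  # measured y at x = 1..5

rate = 1.0  # initial hypothesis: y = rate * x

for x, y in enumerate(observations, start=1):
    prediction = rate * x          # the hypothesis makes a testable prediction
    error = y - prediction
    if abs(error) > 0.5:           # the prediction fails the test...
        rate += error / x * 0.5    # ...so the hypothesis is refined

print(round(rate, 1))  # the hypothesis has converged near the true rate of ~2
```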

Therefore, since Bacon saw the scientific method as a philosophy; since Broad hopes that philosophy can avoid the scandal of inductive reasoning by 2026; and since Anne believes that empirical philosophy appears to be moving significantly closer to the scientific method; I have decided that unless someone objects by Monday, I will begin to include my version of philosophical posts in this thread. If someone does object by Monday, then I will open a new thread in "The Rest" folder entitled "Philosophy in the Anthropocene".

Logged

“It is not the strongest or the most intelligent who will survive but those who can best manage change.” ― Leon C. Megginson

First, I would like to note that I have not studied philosophy, so feel free to correct me if I am making illogical, or poorly conceived, points. That said, I believe that any and all philosophies should be practical, in that they should provide one (and/or the common good) with some benefit or improvement. In this regard I note that most philosophies are associated with some theology or other, whereas the philosophy that I hope to espouse rests on something called "swimology", because it only requires that one accept those issues that one understands will practically help keep oneself afloat in the current and coming troubled waters of life. Again, in this regard, I am not interested in discussing philosophies that cannot be demonstrated to provide some measurable benefit.

Furthermore, I believe that to maximize the benefit (for oneself and/or the common good), one must work responsibly to gradually become able to apply one's swimology effectively in the ever-changing world that we all live in. In this regard, progress on mental improvement is most important. One example of a technique for improving one's mind is mindfulness-based cognitive therapy (MBCT), as described in the following wiki-link (& associated extract):

Extract: "Mindfulness-based cognitive therapy (MBCT) is a psychological therapy designed to aid in preventing the relapse of depression, specifically in individuals with major depressive disorder (MDD). It uses traditional cognitive behavioral therapy (CBT) methods and adds in newer psychological strategies such as mindfulness and mindfulness meditation. Cognitive methods can include educating the participant about depression. Mindfulness and mindfulness meditation focus on becoming aware of all incoming thoughts and feelings and accepting them, but not attaching or reacting to them. Like CBT, MBCT functions on the theory that when individuals who have historically had depression become distressed, they return to automatic cognitive processes that can trigger a depressive episode. The goal of MBCT is to interrupt these automatic processes and teach the participants to focus less on reacting to incoming stimuli, and instead accepting and observing them without judgment. This mindfulness practice allows the participant to notice when automatic processes are occurring and to alter their reaction to be more of a reflection. … Beyond its use in reducing depressive acuity, research additionally supports the effectiveness of mindfulness meditation upon reducing cravings for substances that people are addicted to. Addiction is known to involve the weakening of the prefrontal cortex that ordinarily allows for delaying of immediate gratification for longer term benefits by the limbic and paralimbic brain regions. Mindfulness meditation of smokers over a two-week period totaling 5 hours of meditation decreased smoking by about 60% and reduced their cravings, even for those smokers in the experiment who had no prior intentions to quit. Neuroimaging of those who practice mindfulness meditation has been shown to increase activity in the prefrontal cortex, a sign of greater self-control."

Another, still deeper technique for mental improvement/purification is Vipassana meditation. As indicated in the information contained in the following Wikipedia link, Vipassanā is a word in Pāli [the ancient Indian language used by the Buddha, Siddhattha Gotama] meaning "… insight into the true nature of reality …". Furthermore, this Wikipedia link acknowledges that: "Vipassanā-meditation is an ancient practice taught by Buddhas, reintroduced by Ledi Sayadaw and Mogok Sayadaw and popularized by Mahasi Sayadaw, S. N. Goenka and the Vipassana movement, in which mindfulness of breathing and of thoughts, feelings and actions are being used to gain insight in the true nature of reality."

Extract: "Vipassanā (Pāli) or vipaśyanā … in the Buddhist tradition means insight into the true nature of reality, namely as the three marks of existence: impermanence, suffering or unsatisfactoriness, and the realisation of non-self. Vipassanā-meditation is an ancient practice taught by Buddhas, reintroduced by Ledi Sayadaw and Mogok Sayadaw and popularized by Mahasi Sayadaw, S. N. Goenka and the Vipassana movement, in which mindfulness of breathing and of thoughts, feelings and actions are being used to gain insight in the true nature of reality. Due to the popularity of Vipassanā-meditation, the mindfulness of breathing has gained further popularity in the west as mindfulness. … Contemporary Theravada orthodoxy regards samatha as a preparation for vipassanā, pacifying the mind and strengthening the concentration in order to allow the work of insight, which leads to liberation. In contrast, the Vipassana Movement argues that insight levels can be discerned without the need for developing samatha further due to the risks of going out of course when strong samatha is developed. … Vipassanā can be cultivated by the practice that includes contemplation, introspection and observation of bodily sensations, analytic meditation and observations on life experiences like death and decomposition. The practices may differ in the modern Buddhist traditions and non-sectarian groups according to the founder but the main objective is to develop insight. … In the Vipassanā Movement, the emphasis is on the Satipatthana Sutta and the use of mindfulness to gain insight into the impermanence of the self-view."

In this thread, instead of focusing on how Vipassana can be used to liberate the individual human mind (through mindfulness/Vipassana meditation), I propose to discuss how insights from Vipassana may be used in the Information Age to help transcend the coming anthropogenically induced global socio-economic collapse, and thereby contribute to a more sustainable socio-economic order.

In this regard, Vipassana is an eminently practical practice: it does not recommend that people (and their consequent socio-economic systems) follow received insights until they are ready to discover, and take responsibility for, such insights for themselves, by applying what I called swimology at the beginning of this post.

Logged

“It is not the strongest or the most intelligent who will survive but those who can best manage change.” ― Leon C. Megginson