Saturday, July 24, 2010

Dan Siegel has created this easy-to-use, easy-to-learn meditation that can get people tuned into their essential selves - Richard Schwartz's Self - a part of us where there is no anxiety, no depression, no fear, just spacious peace. Here is part of how Schwartz defines the Self:

I was also finding that the Self wasn't just a passive witness state. In fact, it wasn't just a state of mind, but could also be an active healing presence inside and outside people. It wasn't only available during times when, in therapy or meditation, people concentrated on separating from or witnessing their thoughts and emotions. Once a person's parts learned to trust that they didn't have to protect so much and could allow the Self to lead, some degree of Self would be present for all their decisions and interactions. Even during a crisis, when a person's emotions were running high, there would be a difference because of the presence of Self energy. Instead of being overwhelmed by and blending with their emotions, Self-led people were able to hold their center, knowing that it was just a part of them that was upset now and would eventually calm down. They became the "I" in the storm. Over the years of doing this work, it becomes easier to sense when some degree of Self is present in people and when it's not. To rephrase a joke, you get the impression that "the lights are on and someone is home." A person who is leading with the Self is easy to identify. Others describe such a person as open, confident, accepting -- as having presence. They feel immediately at ease in a Self-led person's company, as they sense that it is safe to relax and release their own Selves. Such a person often generates remarks such as, "I like him because I don't have to pretend -- I can be myself with him." From the person's eyes, voice, body language, and energy, people can tell they are with someone who is authentic, solid, and unpretentious. They are attracted by the Self-led person's lack of agenda or need for self-promotion, as well as his or her passion for life and commitment to service. Such a person doesn't need to be forced by moral or legal rules to do the right thing. 
He or she is naturally compassionate and motivated to improve the human condition in some way because of the awareness that we are all connected.

* * * * *

To clarify this discussion, I find it useful to differentiate between what people report while meditating -- while being reabsorbed into the ocean -- and what people are like when their Self is actively leading their everyday lives. If meditation allows immersion into a seemingly Self-less oceanic state, then the Self is a separate wave of that ocean. It is that oceanic state which seems so difficult to describe. People report feeling as if they have no boundaries, are one with the universe, and lose their identity as a separate being. This is accompanied by a sense of spaciousness in body and mind, and can be an experience of great contentment, often with moments of bliss. They often feel a pulsating energy or warmth running through their bodies and may sense a kind of light in or around them. People encounter different levels and stages as they deepen their meditative practice, which the different esoteric traditions have explored and charted. Here we are more concerned with what people are like when they bring some of that awareness, spaciousness, and energy to their daily tasks and relationships -- again, when they are a wave rather than the ocean. What qualities do they report and display when they live in the world yet hold the memory of who they really are? What are the characteristics of Self-leadership? I don't know the entire answer to that question. After twenty years of helping people toward that Self-leadership, I can describe what my clients exhibit as they have more of their Self present. As I sifted through various adjectives to capture my observations, I repeatedly came up with words that begin with the letter C. So, the eight Cs of self-leadership include: calmness, curiosity, clarity, compassion, confidence, creativity, courage, and connectedness.

In Schwartz's opinion, based on nearly 30 years of working with clients, everyone has this Self and has access to it - an average person, a depressed person, a DID person, a schizophrenic person, even a sociopath. His model of therapy is less about getting rid of unhealthy parts than about accessing a person's Self and using that energy to help "unburden" troubled parts.

This Self is exactly what Siegel is accessing in his meditation by moving from the spokes back to the hub (the Self).

In some ways, this is a better and healthier version of Genpo Roshi's Big Mind meditation technique, and it has some of the same roots. Big Mind seeks to separate the Self from the "parts," which is also what Schwartz does with his model - but Siegel's version does this faster and more safely for most people (although Siegel does not use it with unstable clients because of the risk of dissociation).

Editor's Note: This post is the first in a four-part series of essays for Scientific American by primatologist Frans de Waal on human nature, based on his ongoing research. De Waal and other researchers appear in a series of Department of Expansion videos focusing on the same topic.

How often do we see rich people march in the street shouting that they're earning too much? Or stockbrokers complaining about the "onus of the bonus"? Protesters typically are blue-collar workers yelling that their jobs shouldn't go overseas or that they should earn more. A more exotic example was the 2008 march through the capital of Swaziland by poor women who felt that the king's wives had overstepped their privileges by chartering an airplane for a shopping spree in Europe.

Fairness is viewed differently by the haves and have-nots. The underlying emotions and desires aren't half as lofty as the ideal itself. The most recognizable emotion is resentment. Look at how children react to the slightest discrepancy in the size of their pizza slice compared with their siblings'. They shout, "That's not fair!" but never in a way that transcends their own desires.

An experiment with capuchin monkeys by Sarah Brosnan, of Georgia State University's CEBUS Lab, and myself illuminated this emotional basis. These monkeys will happily perform a task for cucumber slices until they see others getting grapes, which taste so much better. They become agitated, throw down their measly cucumbers, and go on strike. A perfectly fine vegetable has become unpalatable! Not all economists, philosophers and anthropologists were happy with our interpretation, because they traditionally consider the "sense of fairness" uniquely human. But by now there are many other experiments, even on dogs, that confirm our initial findings.

Obviously, things get extremely political if one claims that a desire for income equality has evolutionary backing, but it is hard to deny that the collapse of the world economy in 2008 was partly due to a massive misjudgment of human nature. We're considerably less selfish and more social than advertised.

ABOUT THE AUTHOR

Frans de Waal, PhD, is a Dutch-American primatologist known for his popular books, such as Chimpanzee Politics: Power and Sex among Apes (1982) and The Age of Empathy: Nature's Lessons for a Kinder Society (2009). He teaches at Emory University in Atlanta where he directs the Living Links Center at the Yerkes National Primate Research Center. He has been elected to the National Academy of Sciences and the Royal Netherlands Academy of Arts and Sciences.

The views expressed are those of the author and are not necessarily those of Scientific American.

What is method, within the context of the unity of method and wisdom? It is a dedicated heart of bodhichitta, based on love and compassion. It apprehends its object, enlightenment, with the intention to achieve it in order to benefit others. Compassion, as its basis, apprehends its object, the suffering of others, with the wish to remove it.

Wisdom, on the other hand, is a correct view that understands voidness--the absence of fantasized, impossible ways of existing. Even if it is aimed at the same object as method, it apprehends that object as not existing in an impossible way.

The ways wisdom and compassion each apprehend their object are not at all the same. Therefore, we need to actualize these two, as method and wisdom, first separately and then together.

Even if we speak about the mahamudra* that is method and wisdom, inseparable by nature in the ultimate tantric sense, the first stage for its realization is understanding the abiding nature of reality.

Dr. Dan Siegel and Roshi Joan Halifax kick off the Mindsight retreat by asking the question: what is awareness, and why does it matter? Dan goes on to say that if you're interested in change, awareness is the place to start. Without awareness, things happen without choice and change occurs without intention. Dan introduces the triangle of human experience, which consists of our awareness of relationships, body/brain, and mind. He states that in order for change to be long-lasting, there needs to be synaptic change.

Roshi Joan begins today's talk by reviewing the morning's guided meditation and how posture can influence our mental continuum. Dan goes deeper into the triangle of human experience and asks how it can become a triangle of well-being. He introduces the "domains of integration": when a system is integrated, it is in harmony; when linkage is blocked, it moves toward chaos or rigidity.

Excellent - cultural neuroscience is an up-and-coming field in which we still have much to learn about the ways interpersonal and communal relationships are reflected in and impacted by brain function.

For many years, social scientists have attempted to explain human cultural differences by studying behavioral or attitudinal traits. But recent advances in neuroimaging techniques are now allowing researchers to look directly into the brain and to identify these differences at the neural level.

In this podcast, we are delighted to feature Dr. Nalini Ambady of Tufts University, one of the leading scientists in the emerging field of cultural neuroscience. Be sure to join us in this fascinating podcast as we discuss what exactly defines culture from a neuroscience perspective, and what areas of the brain might be responsible for our respective cultural norms and identities.

We have to admit impermanence into our lives. It's important to live with impermanence as a frame of reference so that we can approach each moment or each day with a sense of humility about what we are able to do and what we are not able to do and relinquish control over things we cannot have control over. It is important to live as if things are as permanent as stone.

You have to invest yourself in love and concern for people, and accept people's love as if that's the only thing that exists: the commitment to living as if everything is always there forever, together with the acceptance that nothing is going to survive.

Theory of mind is a theory insofar as the mind is not directly observable.[2] The presumption that others have a mind is termed a theory of mind because each human can only prove the existence of his or her own mind through introspection, and no one has direct access to the mind of another. It is typically assumed that others have minds by analogy with one's own, and based on the reciprocal nature of social interaction, as observed in joint attention,[3] the functional use of language,[4] and understanding of others' emotions and actions.[5] Having a theory of mind allows one to attribute thoughts, desires, and intentions to others, to predict or explain their actions, and to posit their intentions. As originally defined, it enables one to understand that mental states can be the cause of—and thus be used to explain and predict—others’ behavior.[6] Being able to attribute mental states to others and understanding them as causes of behavior implies, in part, that one must be able to conceive of the mind as a “generator of representations”.[7][8] If a person does not have a complete theory of mind, it may be a sign of cognitive or developmental impairment.

Theory of mind appears to be an innate potential ability in humans, but one requiring social and other experience over many years to bring to fruition. Different people may develop more or less effective theories of mind. Empathy is a related concept, meaning experientially recognizing and understanding the states of mind of others, including their beliefs, desires, and particularly their emotions, often characterized as the ability to "put oneself into another's shoes." Theorizing in the neo-Piagetian theories of cognitive development maintains that theory of mind is a byproduct of a broader hypercognitive ability of the human mind to register, monitor, and represent its own functioning.[9]

Research on theory of mind in a number of different populations (human and animal, adults and children, normally- and atypically-developing) has grown rapidly in the almost 30 years since Premack and Woodruff's paper, "Does the chimpanzee have a theory of mind?",[10] as have the theories of theory of mind. The emerging field of social neuroscience has also begun to address this debate, by imaging humans while performing tasks demanding the understanding of an intention, belief or other mental state.

An alternative account of ToM is given within operant psychology and provides significant empirical evidence for a functional account of both perspective taking and empathy. The most developed operant approach is founded on research on derived relational responding and is subsumed within what is called "Relational Frame Theory." According to this view, empathy and perspective taking comprise a complex set of derived relational abilities based on learning to discriminate and verbally respond to ever more complex relations between self, others, place, and time, and on the transformation of function through established relations.[11][12]

And now, on with the video... which is probably shorter than that brief explanation.

The “Theory of Mind” is an important concept in the study of cognition, communication, and child development. ToM refers to one’s ability to attribute thoughts, feelings, and knowledge to individuals other than oneself. Dr. Robert Seyfarth is one of the leading researchers in the field of animal communication, and he offers a precise, succinct account of ToM in this nice little clip…

The neurological turn in recent web criticism exploits "the obsession with anything related to the mind, brain and consciousness". Geert Lovink turns the discussion to the politics of network architecture, exploring connections between the colonization of real-time and the rise of the national web.

"Sociality is the capacity of being several things at once." – G. H. Mead

Web 2.0 has three distinguishing features: it is easy to use; it facilitates the social element; and users can upload their own content in whatever form, be it pictures, videos or text. It is all about providing users with free publishing and production platforms. The focus on how to make a profit from free user-generated content came in response to the dotcom crash. At the height of dotcom mania all attention was focused on e-commerce. Users were first and foremost potential customers. They had to be convinced to buy goods and services online. This is what was supposed to be the New Economy. In 1998 the cool cyberworld of geeks, artists, designers and small entrepreneurs got bulldozed overnight by "the suits": managers and accountants who were after the Big Money provided by banks, pension funds and venture capital. With the sudden influx of business types, hip cyberculture suffered a fatal blow and lost its avant-garde position for good. In a surprising turn of events, the hyped-up dotcom entrepreneurs left the scene equally fast when, two years later, the New Economy bubble burst. Web 2.0 cannot be understood outside of this context: as the IT sector takes over the media industry, the cult of "free" and "open" is nothing but ironic revenge on the e-commerce madness.

During the post-9/11 reconstruction period, Silicon Valley found renewed inspiration in two projects: the vital energy of the search start-up Google (which successfully managed to postpone its IPO for years), and the rapidly emerging blog scene, which gathered around self-publishing platforms such as blogger.com, Blogspot and LiveJournal. Both Google's search algorithm and Dave Winer's RSS invention (the underlying blog technology) date back to 1997-98, but managed to avoid the dotcom rage until they surfaced to form the duo-core of the Web 2.0 wave. Whereas blogging embodied the non-profit, empowering aspect of personal responses grouped around a link, Google developed techniques that enabled it to parasite on other people's content, a.k.a. "organizing the world's information". The killer app turned out to be personalized ads. What is sold is indirect information, not direct services. Google soon discovered it could profit from free information floating around on the open Internet, anything from amateur video to news sites. The spectacular rise of user-generated content has been fuelled by the IT industry, not the media sector. Profit is no longer made at the level of production, but through the control of distribution channels. Apple, Amazon, eBay and Google are the biggest winners in this game.

Let's discuss some recent Web 2.0 criticism. I leave out justified privacy concerns as addressed by danah boyd and others, in part because they have already received wide coverage. Andrew Keen's The Cult of the Amateur (2007)[1] has been regarded as one of the first critiques of the Web 2.0 belief system. "What happens", Keen asks, "when ignorance meets egoism meets bad taste meets mob rule? The monkey takes over." When everyone broadcasts, no one is listening. In this state of "digital Darwinism" only the loudest and most opinionated voices survive. What Web 2.0 does is "decimate the ranks of our cultural gatekeepers". Whereas Keen could still be read as the grumpy and jealous response of the old media class, this is no longer the case with Nicholas Carr's The Big Switch (2008),[2] in which he analyses the rise of cloud computing. For Carr this centralized infrastructure signals the end of the autonomous PC as a node within a distributed network. The last chapter, entitled "iGod", indicates a "neurological turn" in net criticism. Starting from the observation that Google's intention has always been to turn its operation into an Artificial Intelligence, "an artificial brain that is smarter than your brain" (Sergey Brin), Carr turns his attention to the future of human cognition: "The medium is not only the message. The medium is the mind. It shapes what we see and how we see it." With the Internet stressing speed, we become the Web's neurons: "The more links we click, pages we view, and transactions we make, the more intelligence the Web makes, the more economic value it gains, and the more profit it throws off."

In his famous 2008 Atlantic Monthly essay "Is Google Making Us Stupid? What the Internet Is Doing to Our Brains", Carr takes this argument a few steps further and argues that constant switching between windows and sites and frantic use of search engines will ultimately dumb us down. Is it ultimately the responsibility of individuals to monitor their Internet use so that it does not have a long-term impact on their cognition? In its extensive coverage of the ensuing debate, Wikipedia refers to Sven Birkerts' 1994 study The Gutenberg Elegies: The Fate of Reading in the Electronic Age, and the work of developmental psychologist Maryanne Wolf, who pointed out the loss of "deep reading" capacity. Internet-savvy users, she states, seem to lose the ability to read and enjoy thick novels and comprehensive monographs. Carr's next book, The Shallows: What the Internet Is Doing to Our Brains, will appear in 2010. Carr and others cleverly exploit the Anglo-American obsession with anything related to the mind, brain and consciousness – mainstream science reporting cannot get enough of it. A thorough economic (let alone Marxist) analysis of Google and the free and open complex is seriously uncool. It seems that the cultural critics will have to sing along with the Daniel Dennetts of this world (loosely gathered on edge.org) in order to communicate their concerns.

The impact on the brain is an element picked up on by the Frankfurter Allgemeine Zeitung editor and Edge member Frank Schirrmacher in his book-length essay Payback (2009).[3] Whereas Carr's take on the collapse of the white male's multi-tasking capacities had the couleur locale of a US IT-business expert a.k.a. East Coast intellectual, Schirrmacher moves the debate into the continental European context of an aging middle class driven by defensive anxiety over Islamic fundamentalism and Asian hypermodernity. Like Carr, Schirrmacher seeks evidence of a deteriorating human brain that can no longer keep up with iPhones, Twitter and Facebook on top of the already existing information flows from television, radio and the printed press. We are on permanent alert and have to submit to the logic of constant availability and speed. Schirrmacher speaks of "I exhaustion". Most German bloggers responded negatively to Payback. Apart from factual mistakes, what concerned them most was Schirrmacher's implicit anti-digital cultural pessimism (something he denies) and the conflict of interest between his role as newspaper publisher and his role as critic of the zeitgeist. Whatever the cultural media agenda, Schirrmacher's call will be with us for quite some time. What place do we want to give digital devices and applications in our everyday life? Will the Internet overwhelm our senses and dictate our worldview? Or will we have the will and vision to master the tools?

The latest title in a growing collection is virtual reality pioneer Jaron Lanier's You Are Not a Gadget (2010),[4] which asks: "What happens when we stop shaping the technology and technology starts shaping us?" Much like Andrew Keen, Lanier defends the individual and points to the dumbing-down effect of the "wisdom of the crowd". In Wikipedia, unique voices are suppressed in favour of mob rule. This also crushes creativity: Lanier asks why the past two decades have not resulted in new music styles and subcultures, and blames the strong emphasis on retro in contemporary, remix-dominated music culture. Free culture not only decimates the income of performing artists, it also discourages musicians from experimenting with new sounds. The democratization of digital tools has not led to the emergence of "super-Gershwins". Instead, Lanier sees "pattern exhaustion", a phenomenon in which a culture runs out of variations on traditional designs and becomes less creative: "We are not passing through a momentary lull before a storm. We have instead entered a persistent somnolence and I have come to believe that we will only escape it when we kill the hive."

Thierry Chervel of the German cultural aggregator Perlentaucher writes: "According to Schirrmacher the Internet grinds down the brain and he wants to regain control. But that is no longer possible. The revolution eats its children, fathers, and those who detest it."[5] If you do not want to go into complaint mode you end up celebrating the "end of control". The discussion will eventually have to shift to who is in charge of the Internet. The Internet and society debate should be about the politics and aesthetics of its network architecture and not be "medicalized". So instead of repeating what the brain faction proclaims, I would like to turn to trends that need equal attention. Rather than mapping the mental impact and wondering whether something can be done to tame the net's influence, or discussing over and again the fate of the news and publishing industries, let us study the emerging cultural logic (such as search). Let us dig into the knowledge production of Wikipedia, and study the political forces that operate outside of the mainstream structures. Let us look at new forms of control.

The colonization of real-time

There is a fundamental shift away from the static archive towards the "flow" and the "river". Protoblogger Dave Winer promotes it on Scripting News[6] and Nicholas Carr writes sceptical notes about it in his blog series The Real Time Chronicles.[7] We see the trend popping up in metaphors like Google Wave. Twitter is the most visible symptom of this transitory tendency. Who responds to yesterday's references? History is something to get rid of. Silicon Valley is gearing up for the colonization of real-time, away from the static web "page" that still refers to the newspaper. Users no longer feel the need to store information, and the "cloud" facilitates this liberating movement. If we save our files at Google or elsewhere, we can get rid of the clumsy, all-purpose PCs. Away with the ugly grey office furniture. The Web has turned into an ephemeral environment that we carry with us, on our skin. Some have even said goodbye to the very idea of "search" because it is too time-consuming an activity, often with unsatisfactory outcomes. This could, potentially, be the point at which the Google empire starts to crumble – and that is why they are keen to be at the forefront of what French philosopher of speed Paul Virilio described a long time ago: these days, live television is considered too slow, as news presenters turn to Twitter for up-to-the-second information. Despite all the justified calls for "slow communication", the market is moving in the opposite direction. Soon, people may not have time to pull some file from a dusty database. Much like in finance, the media industry is exploring possibilities to maximize surplus value from the exploitation of milliseconds. But unlike hedge funds, this is a technology for all. Profits will only grow if the colonization of real-time is employed on a planetary scale.

Take Google Wave. It merges e-mail, instant messaging, wikis and social networking. Wave integrates the feeds of Facebook, Twitter and other accounts into one real live event happening on the screen. It is a meta online tool for real-time communication. Seen from your "dashboard", Wave looks like you are sitting on the banks of a river, watching the current. It is no longer necessary to approach the PC with a question and then dive into the archive. The Internet as a whole is going real time in an attempt to come closer to the messiness, the complexities of the real-existing social world. But one step forward means two steps back in terms of design. Just look at Twitter, which resembles ASCII email and the SMS messages on your 2001 cell phone. To what extent is this visual effect conscious? This HTML-style typographic rawness may not be a technical imperfection; rather, it points to the unfinishedness of the Eternal Now in which we are caught. There is simply no time to enjoy slow media. Back in Tuscany mode, it is nice to lie back and listen to the offline silence, but that is reserved for quality moments.

The pacemaker of the real-time Internet is "microblogging", but we can also think of the social networking sites and their urge to pull as much real-time data out of their users as possible: "What are you doing?" Give us your self-shot. "What's on your mind?" Expose your impulses. Frantically updated blogs are part of this inclination, as are frequently updated news sites. The driving technology behind this is the constant evolution of RSS feeds, which makes it possible to get instant updates of what's happening elsewhere on the web. The proliferation of mobile phones plays a significant background role in "mobilizing" your computer, social network, video and photo camera, audio devices, and eventually also your TV. The miniaturization of hardware combined with wireless connectivity makes it possible for technology to become an invisible part of everyday life. Web 2.0 applications respond to this trend and attempt to extract value out of every situation we find ourselves in. The Machine constantly wants to know what we think, what choices we make, where we go, who we talk to.

There is no evidence that the world is becoming more virtual. The cyber-prophets were wrong here. The virtual is becoming more real. It wants to penetrate and map out our real lives and social relationships. We are no longer encouraged to act out some role, but forced to be "ourselves" (which is no less theatrical or artificial). We constantly log in and create profiles in order to present our "selves" on the global marketplace of employment, friendship and love. We can have multiple passions but only one certified ID. Trust is the oil of global capitalism and the security state, required by both sides in any transaction or exchange. In every rite de passage, the authorities must trust us before they let both our bodies and information through. The old idea that the virtual is there to liberate you from your old self has collapsed. It is all about self-management and techno-sculpturing: how do you shape the self in real-time flow? There is no time for design, no time for doubt. System response cannot deal with ambivalence. The self that is presented here is post-cosmetic. The ideal is to become neither the Other nor the better human. Mehrmensch, not Übermensch. The polished, perfect personality lacks empathy and is outright suspect. It is only a matter of time until super persons such as celebrities reveal their weaknesses. Becoming better implies revealing who you are. Social media invite users to "administer" their all-too-human sides beyond merely hiding or exposing controversial aspects. Our profiles remain cold and unfinished if we do not expose at least some aspects of our private lives. Otherwise we are considered robots, anonymous members of a vanishing twentieth century mass culture. In Cold Intimacies, Eva Illouz puts it this way: "It is virtually impossible to distinguish the rationalization and commodification of selfhood from the capacity of the self to shape and help itself and to engage in deliberation and communication with others."[8]

Every minute of life is converted into "work", or at least availability, by a force exerted from the outside. That is the triumph of biopolitical interpretations of informational capitalism. At the same time, we appropriate and incorporate technology into our private lives, a space of personal leisure, aiming to create a moment for ourselves. How do we balance the two? It seems an illusion to speed up and slow down simultaneously, but this is exactly how people lead their lives. We can outsource one of the two and deal with either speedy or slow tasks according to our character, skill set, and taste.

Netizens and the rise of extreme opinions

Where has the rational and balanced "netizen" gone, the well-behaved online citizen? The Internet seems to have become an echo chamber for extreme opinions. Is Web 2.0 getting out of control? At first glance, the idea of the netizen was a mid-1990s response to the first wave of users that took over the Net. The netizen moderates, cools down heated debates, and above all responds in a friendly, non-repressive manner. The netizen does not represent the Law, is no authority, and acts like a personal advisor, a guide in a new universe. The netizen is thought to act in the spirit of good conduct and corporate citizenship. Users were to take social responsibility themselves – it was not a call for government regulation and was explicitly designed to keep legislators out of the Net. Until 1990, during the late academic stage of the Net, it was presumed that all users knew the rules (also called netiquette) and would behave accordingly. (On Usenet there were no "netizens": everyone was a pervert.) Of course this was not always the case. When misbehaviour was noticed, the individual could be convinced to stop spamming, bullying, etc. This was no longer possible after 1995, when the Internet opened up to the general public. Because of the rapid growth of the World Wide Web, with the browsers that made it so much easier to use, the code of conduct developed over time by IT-engineers and scientists could no longer be passed on from one user to the next.

At the time, the Net was seen as a global medium that could not easily be controlled by national legislation. Perhaps there was some truth in this. Cyberspace was out of control, but in a nice and innocent way. The image of a task force installed in a room next to the office of the Bavarian prime minister to police the Bavarian part of the Internet was an endearing and somewhat desperate one. At the time we had a good laugh about this predictably German measure.

9/11 and the dotcom crash cut the laughter short. Over a decade later, there are reams of legislation, entire governmental departments and a whole arsenal of software tools to oversee the National Web, as it is now called. Retrospectively, it is easy to dismiss the rational "netizen" approach as a libertarian Gestalt, a figure belonging to the neo-liberal age of deregulation. However, the issues the netizen was invented to address have grown exponentially, not gone away. These days we would probably frame them as a part of education programs in schools and as general awareness campaigns. Identity theft is a serious business. Cyberbullying amongst children does happen, and parents and teachers need to know how to identify and respond to it. Much like in the mid-1990s, we are still faced with the problem of "massification". The sheer number of users and the intensity with which people engage with the Internet is phenomenal. What perhaps has changed is that many no longer believe that the Internet community can sort out these issues itself. The Internet has penetrated society to such an extent that the two have become one and the same.

In times of global recession, rising nationalism, ethnic tension and collective obsession with the Islam Question, comment cultures inside Web 2.0 become a major concern for media regulators and the police. Blogs, forums and social networking sites invite users to leave behind short messages. It is particularly young people who react impulsively to (news) events, often posting death threats to politicians and celebrities without realizing what they have just done. The professional monitoring of comments is becoming a serious business. To give some Dutch examples: Marokko.nl has to oversee 50,000 postings on a daily basis, and the right-wing Telegraaf news site gets 15,000 comments on its selected news items daily. Populist blogs like Geen Stijl encourage users to post extreme judgments – a tactic proven to draw attention to the site. Whereas some sites have internal policies to delete racist remarks, death threats and libellous content, others encourage their users in this direction, all in the name of free speech.

Current software enables users to leave behind short statements, often excluding the possibility for others to respond. Web 2.0 was not designed to facilitate debate. The "terror of informality" inside "walled gardens" like Facebook is increasingly becoming a problem. If the Web goes real-time, there is less time for reflection and more technology that facilitates impulsive blather. This development will only invite authorities to interfere further in online mass conversations. Will (interface) design bring a solution here? Bots play an increasing role in the automated policing of large websites. But bots merely work in the background, doing their silent jobs for the powers-that-be. How can users regain control and navigate complex threads? Should they unleash their own bots and design tools in order to regain their "personal information autonomy", as David d'Heilly once put it?

The rise of the national web

Web 2.0 can be seen as a specific Silicon Valley ideology. It also simply means a second stage of Internet development. Whereas control over content may have vanished, control within the nation state is on the rise. Due to the rise of the worldwide Internet user base (now at 1.7 billion), focus has shifted from the global potential towards local, regional and national exchanges. Most conversations are no longer happening in English. A host of new technologies are geo-sensitive. The fact that 42.6 per cent of Internet users are located in Asia says it all. Only around 25 per cent of content is in English these days. Such statistical data represent the true Web 2.0. What people care about first and foremost is what happens in their immediate surroundings – and there is nothing wrong with that. This was predicted in the Nineties; it just took a while to happen. The background of the "national web" is the development of increasingly sophisticated tools to oversee the national IP range (the IP addresses allocated to a country). These technologies can be used in two directions: to block users outside the country from viewing, for example, national television online, or visiting public libraries (such as in Norway and Australia, in the case of new ABC online services). They can also prevent citizens from visiting foreign sites (mainland Chinese residents are not able to visit YouTube, Facebook, etc.). In a recent development, China is now exporting its national firewall technology to Sri Lanka, which intends to use it to block the "offensive websites" of exile Tamil Tiger groups.
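The mechanism behind such geo-blocking is, at its core, a simple membership test: is the visitor's IP address inside the block of addresses allocated to a given country? A minimal sketch in Python illustrates the principle – the two ranges below are documentation-only placeholder networks, not any country's real allocation (real allocations are published by the regional registries such as RIPE and APNIC):

```python
import ipaddress

# Hypothetical "national" IP ranges; real ones come from the regional
# Internet registries and number in the thousands per country.
NATIONAL_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_domestic(ip: str) -> bool:
    """Return True if the address falls inside the national IP range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in NATIONAL_RANGES)

print(is_domestic("203.0.113.42"))  # inside one of the listed ranges: True
print(is_domestic("8.8.8.8"))       # outside all of them: False
```

The same test, run in either direction, yields both uses described above: a national broadcaster can reject foreign addresses, and a national firewall can intercept domestic requests bound for foreign sites.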

The massive spread of the Internet really only happened in the past 5 to 10 years. The Obama campaign was a significant landmark in this process. Representation and participation, in this context, are outworn concepts. "Democratization" means that firms and politicians have a goal and then invite others to contribute to it. In this age of large corporations, big NGOs and governmental departments, it is all too easy to deploy Web 2.0 strategies as a part of your overall communication plan. True, open-knowledge-for-all has not arrived everywhere yet – and there is still a role to play for the Web 2.0 consultant. But Web 2.0 is certainly no longer an insider tip. A lot is already known about web demographics, usability requirements and what application to use in what context. One would not use MySpace to approach senior citizens. It is known that young people are reluctant to use Twitter – it just isn't their thing.

These are all top-down considerations. It gets more interesting if you ask the Netizen 2.0 question. How will people themselves start to utilize these tools bottom-up? Will activists start to use their own Web 2.0 tools? Remember that social networking sites did not originate in a social movement setting. They were developed as post-dotcom responses to the e-commerce wave of the late 1990s, which had no concept of what users were looking for online. Instead of being regarded merely as consumers of goods and services, Web 2.0 users are pressed to produce as much data as possible. Profiles are abstracted from so-called "user generated content" that are then sold to advertisers as direct marketing data. Users do not experience the parasitic nature of Web 2.0 immediately.

From a political point of view, the rise of national webs is an ambivalent development. In design terms it is all about localization of fonts, brands and contexts. Whereas communicating in one's own language and not having to use Latin script keyboards and domain names can be seen as liberating, and necessary for bringing on board the remaining 80 per cent of the world's population that is not yet using the Internet, the new digital enclosure also presents a direct threat to the free and open exchanges the Internet once facilitated. The Internet turns out to be neither the problem nor the solution for the global recession. As an indifferent bystander, it does not lend itself easily as a revolutionary tool. It is part of the Green New Deal, but is not driving these reforms. Increasingly, authoritarian regimes such as Iran are making tactical use of the Web in order to crack down on the opposition. Against all predictions, the Great Chinese Firewall is remarkably successful in keeping out hostile content, whilst monitoring the internal population on an unprecedented scale. It proves that power these days is not absolute but dynamic. It is all about control of the overall flow of the population. Dissidents with their own proxy servers that help to circumvent the Wall remain marginal as long as they cannot transport their "memes" into other social contexts. As the jargon says: regardless of your size or intent, it is all about governmentality, how to manage complexity. The only way to challenge this administrative approach is to organize: social change is no longer techno warfare between filters and anti-filters, but a question of "organized networks" that are able to set events in motion.


How does the brain encode courage? This is the question that Israeli researchers, headed by Uri Nili from the Weizmann Institute of Science, seek to answer. Their research, published in Neuron under the title "Fear Thou Not: Activity of Frontal and Temporal Circuits in Moments of Real Life Courage", identifies specific brain regions whose activity correlates with the behavioral expression of courage. Defining courage as the "performance of voluntary action opposed to that promoted by ongoing fear", Nili and colleagues used functional MRI to look at brain activity during a behavioral task requiring the expression of courage.

Figure: The experimental setup and protocol, with illustrative behavior of a participant (depicting the position of the object relative to the participant, starting from most distant at 0).

Participants were placed in an fMRI machine, and a live corn snake was placed on a trolley that extended from the end of the exam room to next to the subject's head within the scanner. The trolley could be moved either towards or away from participants' heads in a step-wise manner. Participants (both those who feared snakes and those who were used to handling snakes) were given control of the location of the trolley and repeatedly asked to choose whether to advance or retreat the snake, the overall goal being to bring the snake as close to their heads as possible.

The researchers imaged the brains of the participants as they made their choices, and identified differences between brain activity when the subjects overcame their fear (moved the snake closer), and when they succumbed to it (moved the snake away). Specifically, two brain regions were found to show activity correlating with overcoming fear, the subgenual anterior cingulate cortex (sgACC) and the right temporal pole (rTP).

Of particular interest was the activity of the sgACC, which showed positive correlation with choosing to bring the snake closer. In trials when fear was overcome, sgACC activity increased during the delay period between presentation of the snake's location and the cue to make the choice, with activity remaining elevated until the button to advance the snake was pressed. In trials where subjects succumbed to their fear, the sgACC activity declined rapidly after the snake's location was presented. The sgACC therefore appears to display on-line activation correlating with the decision to overcome or succumb to fear. Furthermore, greater sgACC activity occurred in trials where greater levels of fear were overcome. From these correlations, as well as other data described in the study, the researchers concluded that activity in the sgACC reflects the effort necessary to overcome fear.

Figure: Average percent BOLD signal change in the sgACC (left panel) and rTP (right panel) ROIs of the Fearful Non-retreaters and Fearless groups in Advance choices of the Snake condition. Black and gray heavy dotted lines represent the choice button presses of the Fearless and Fearful Non-retreaters groups, respectively.

Of note to fear conditioning aficionados, the sgACC is encompassed by the ventromedial prefrontal cortex (vmPFC), a region previously implicated in retrieval of inhibitory associations in studies of fear conditioning. As the authors note, however, previous studies of the vmPFC found no on-line role for the region during acquisition of extinction of fear conditioning, a finding potentially at odds with the on-line role for the sgACC described in this study.

Nevertheless, given previous knowledge regarding the connectivity of the sgACC, as well as the activity of multiple other brain regions during the snake-movement task, Nili et al. propose a model of courage whereby the sgACC inhibits the amygdala, reducing autonomic arousal and prompting subjects to choose action at odds with that promoted by their fear of snakes. The authors round out their paper by suggesting that manipulating sgACC activity may be a potential intervention for disorders involving an inability to overcome fear, and by pointing out the place of their research in a field seeking to understand how the brain shifts between internal representations to select a context-specific behavioral outcome.

Rick Doblin, Ph.D., Executive Director of MAPS, gives a great Google Tech Talk on the history and future of psychedelic research - which is looking a lot brighter now that government restrictions have become less draconian.

Google Tech Talk, November 17, 2009

ABSTRACT

Presented by Rick Doblin, Ph.D., Executive Director MAPS.

We're now in the midst of a worldwide renaissance in psychedelic research, after decades of political suppression. Scientists from around the world will present their new findings at the largest psychedelic conference to take place in the US in 17 years, on April 15-18, 2010, in San Jose, CA (http://www.maps.org/conference/). Even media reports, which usually mention in passing the widespread use of psychedelics by the counterculture in the 1960s, are more hopeful than alarming. In this talk, we'll review the factors which led to the backlash and the lessons to be learned, discuss how the FDA opened the door to research around the world, how the ghost of Timothy Leary was buried at Harvard, and how Burning Man struggles to respond to people who have difficult psychedelic experiences. We'll conclude by explaining how non-profit drug development, initially of MDMA-assisted psychotherapy for posttraumatic stress disorder (PTSD), can transform psychedelics into FDA-approved prescription medicines and can lay the groundwork for the successful, long-term integration of psychedelics into the mainstream of medicine, religion, art, creativity, and celebration.

Rick studied with Stan Grof, M.D., and was in the first group to become certified as holotropic breathwork practitioners. His professional goal is to help develop legal contexts for the beneficial uses of psychedelics and marijuana, primarily as prescription medicines but also for personal growth for otherwise "healthy" people, and to also become a legally licensed psychedelic therapist. He resides in Boston with his wife and three children.