Artificial Telepathy

A thoughtful exploration of mind control technologies, with particular emphasis on psychotronics and V2K (voice-to-skull) weaponry

The project involves basic research needed to make possible a brain-computer interface for decoding thought and communicating it to an intended target. Applications are to situations in which it is either impossible or inappropriate to communicate using visual means or by audible speech; the long-term aim is to provide a significant advance in Army communication capabilities in such situations. Non-invasive brain-imaging technologies like electroencephalography (EEG) offer a potential way for dispersed team members to communicate their thoughts. A Soldier thinks a message to be transmitted. A system for automatic imagined speech recognition decodes EEG recordings of brain activity during the thought message. A second system infers simultaneously the intended target of the communication from EEG signals. Message and target information are then combined to communicate the message as intended.

Overview

In 1967, Dewan published a paper in Nature that first described a method for communicating linguistic information by brain waves measured using EEG. He trained himself and several others to modulate their brains’ alpha rhythms: to turn these rhythms on and off at will. Alpha rhythms reflect brain neuron activity at or about a frequency of 10 Hz and depend not only on whether the eyes are open but also on one’s state of attention. Mental activity and attention abolish these rhythms, which are normally present in a state of mental relaxation. Dewan was able to signal letters of the alphabet in Morse code by voluntarily turning these rhythms on and off with his eyes closed. Signalling such letters, one by one, spells out the words and phrases that the communicator has in mind.
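
Dewan’s scheme amounts to a simple on/off detector: estimate alpha-band power in short windows, threshold it, and read the resulting sequence as Morse symbols. Below is a minimal sketch of that idea, not Dewan’s actual apparatus; the sampling rate, window length and threshold are hypothetical values one would set from a calibration recording.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(segment, fs):
    """Alpha-band (8-12 Hz) power of one EEG window, via Welch's method."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs))
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def alpha_to_morse(eeg, fs, win_s=1.0, threshold=1.0):
    """Threshold alpha power in consecutive windows: relaxed (alpha on) = dash,
    attentive (alpha suppressed) = dot. The threshold comes from calibration."""
    win = int(win_s * fs)
    symbols = []
    for start in range(0, len(eeg) - win + 1, win):
        p = alpha_power(eeg[start:start + win], fs)
        symbols.append('-' if p > threshold else '.')
    return ''.join(symbols)

# Hypothetical demo: 4 s of strong 10 Hz alpha followed by 4 s of low-level noise.
fs = 256
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(0)
alpha_on = 20 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(len(t))
alpha_off = rng.standard_normal(len(t))
print(alpha_to_morse(np.concatenate([alpha_on, alpha_off]), fs, threshold=1.0))
# -> '----....' : four "alpha on" windows (dashes), then four suppressed windows (dots)
```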

In 1988, Farwell and Donchin described a second method for transmitting linguistic information. This method is based on the P300 response, again measured using EEG. The P300 is evoked when a person is presented with a stimulus that matches what he or she is looking for: a target. Farwell and Donchin displayed to the thinker the letters of the alphabet, one by one, eventually displaying the letter that he or she had in mind. The P300 potential is evoked for that target letter, signalling the thinker’s desire to communicate it. Again, thinkers can communicate words by signalling each word’s letters one by one.
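
Conceptually, the Farwell–Donchin speller picks, among the flashed letters, the one whose time-locked EEG epoch shows the largest positivity roughly 300 ms after the flash. The sketch below is only an illustration of that selection rule; the `epochs` dictionary (averaged post-flash epochs per letter) and the window boundaries are hypothetical.

```python
import numpy as np

def pick_target_letter(epochs, fs, window=(0.25, 0.45)):
    """Return the letter whose averaged post-stimulus epoch has the largest
    mean amplitude in the P300 window (~250-450 ms after the flash)."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    scores = {letter: epoch[lo:hi].mean() for letter, epoch in epochs.items()}
    return max(scores, key=scores.get)

# Hypothetical demo: flat responses everywhere except a P300-like bump for "K".
fs = 256
t = np.arange(int(0.8 * fs)) / fs
rng = np.random.default_rng(1)
epochs = {c: 0.2 * rng.standard_normal(len(t)) for c in "ABKQ"}
epochs["K"] = epochs["K"] + 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
print(pick_target_letter(epochs, fs))  # -> 'K'
```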

Can one use brain waves that are more directly linked to speech production to communicate linguistic information? Speech is a natural method for communicating linguistic information. Were one able to use EEG to measure directly the activity of brain speech networks, one could potentially develop an easier and faster method for communicating linguistic information using EEG. Our work on covert speech production pursues this idea. Covert speech is the technical term used to refer to the words one hears in one’s head while thinking: imagined speech. Can we use EEG to measure brain activity during covert speech production in a way that lets one communicate linguistic information in a natural and rapid way?

The work aims also to determine, from brain waves, where the linguistic information should be sent: sent in a particular direction, sent to a particular person, etc. The question is not so much how the message should be sent (e.g., wireless text messaging) but where or to whom. Work on the relationship between alpha rhythms and attention has, since Dewan’s time, revealed that the pattern of alpha rhythm activity in the two hemispheres of the brain provides information on where a person is focusing attention. For example, paying attention to an area in the left half of one’s visual field causes the alpha rhythm activity in the right hemisphere of the brain to desynchronize (and so diminish in intensity), and vice versa. These shifts in brain activity are thought to be helpful in directing more sensory and cognitive resources to the area being attended (e.g., Corbetta and Shulman, 2002). EEG can be used to measure patterns of alpha rhythms (Worden et al., 2000; Sauseng et al., 2005), to measure electric potentials that are evoked in response to a shift in attention (e.g.,Harter et al., 1989; Corbetta et al., 1993), and to measure the change in amplitude of steady-state responses that are evoked by a shift in attention among frequency-tagged visual stimuli (e.g., Srinivasan et al., 2006). We are studying alpha rhythms, evoked responses and steady-state evoked potentials measured using EEG to help develop a brain-computer interface that helps the thinker communicate to where or to whom a message should be sent.

Finally, we aim to learn more about activity in brain networks when two or more tasks are carried out simultaneously. Many studies in cognitive neuroscience involve brain-imaging measurements taken during the performance of a single task (e.g., visual detection, language processing, decision-making). Covert speech production and direction intention are likely to use differing brain resources. Can these differences be used by a brain-computer interface to infer both a communicator’s message and the recipient?

EEG and brain-computer interfaces

A tremendous amount of scientific and engineering progress has been made over the past several decades in developing brain-computer interfaces based on EEG measurements of brain network activity. One indicator of this progress is that there are now (at least) three companies developing EEG-based technologies for use with computer and console games: Emotiv Systems, OCZ Technology, and NeuroSky. The idea is that an EEG headset, worn by the player, provides signals concerning what action the player wants to take in the game, whether the player is paying attention to the game, etc. Game software which is responsive to information provided by the EEG device guides and modifies gameplay.

Research on brain-computer interfaces has historically been motivated more by biomedical applications. For example, people who have suffered strokes or injuries to their brain, as well as those suffering from certain diseases like Lou Gehrig’s disease (ALS), may have impaired movement for part or all of their bodies while preserving a good deal of normal brain function. Can a person signal how they would like to move their arm or move a cursor on a computer screen under such circumstances? Much work has led to success here. For example, researchers at Pitt and CMU showed recently that a monkey can control movement of a prosthetic arm using brain waves measured using implanted electrodes. Electrocorticographic (ECoG) measurements in humans show parallel promise for motor control through a brain-computer interface.

Our work uses the non-invasive technologies EEG, MEG (magnetoencephalography – measurement of magnetic field fluctuations at the scalp caused by brain activity) and fMRI (functional magnetic resonance imaging). EEG measures electric field fluctuations at the surface of the scalp caused by brain activity. While it has the advantages of portability, relatively low cost and high temporal resolution – the ability to track rapid events in the brain – it has several disadvantages (Nunez and Srinivasan, 2006). First, its spatial resolution is limited to about two centimeters; electric potential changes in the brain spread diffusely as they move towards the scalp surface where measurements are made. Second, EEG is sensitive to electric field potential changes caused by muscle. Movements of the eyes, movements of muscles beneath the scalp, etc., create large electric potential changes that can swamp signals from neurons in the brain. MEG has spatial resolution similar to EEG. MEG has the further disadvantage that it relies on very expensive equipment that can only be used in a room which is completely shielded from external sources of electromagnetic radiation. An advantage of MEG over EEG is that it is better able to measure activity in brain cortical areas which are oriented perpendicularly to the scalp’s surface: brain cortex that lies in the folds. fMRI, finally, has very good spatial resolution (on the order of 1 millimeter) but poor temporal resolution. The blood oxygenation level signal relied on in fMRI measurement is sluggish. fMRI is thus useful for localizing brain activity in space but is limited in determining when that activity occurs. While our focus is on an EEG-based brain-computer interface, we use MEG and fMRI to acquire further information about underlying brain activity which is not available in EEG signals.

Imagined speech production

One of the simplest possible ways to test whether EEG provides information concerning thought expressed through imagined (covert) speech is as follows. A person wearing an EEG headset is shown either the letter “y” or the letter “n” very briefly. A second or two later, the person thinks to him or herself the word “yes” or the word “no”, depending on whether the displayed letter was “y” or “n”, respectively. Do this many times while recording the EEG signals. One way to analyze the EEG data from such an experiment is to attempt to classify the data. The aim is to use the EEG signal information alone to distinguish those times when the person was thinking “yes” from those times when the person was thinking “no”. If the EEG signals provide enough information to classify accurately the “yes” and the “no” thoughts, then one has made good progress.
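
As a sketch of the classification step just described, the code below assumes a hypothetical array `X` of single-trial epochs (trials × channels × samples) and a label vector `y` (1 for “yes” trials, 0 for “no”); it extracts simple band-power features and reports cross-validated accuracy with a linear classifier. A real pipeline would add artifact rejection and more careful feature selection.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def band_power_features(X, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Per-trial, per-channel mean PSD in a few canonical frequency bands."""
    n_trials, n_channels, n_samples = X.shape
    freqs, psd = welch(X, fs=fs, nperseg=min(n_samples, fs), axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))   # (trials, channels)
    return np.concatenate(feats, axis=1)              # (trials, channels * bands)

# Hypothetical synthetic data standing in for real "yes"/"no" covert-speech epochs.
rng = np.random.default_rng(2)
fs, n_trials, n_channels, n_samples = 128, 80, 8, 256
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)
t = np.arange(n_samples) / fs
X[y == 1, 0] += 2 * np.sin(2 * np.pi * 6 * t)   # fake theta difference on channel 0

features = band_power_features(X, fs)
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, features, y, cv=5).mean())
```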

However, one cannot rest satisfied with such a result. For example, it may be the case that one can tell apart “yes” from “no” using EEG because the EEG signals in response to the displayed letters “y” and “n” differ. This would mean that the visual responses to the letters used to cue the thinker, rather than the covertly spoken words, lead to discernible differences in the EEG recordings. One wonders more generally how a classification result depends on the prompt: for example, a seen “y” vs. a heard “y”. It could be also the case that a particular person, while remaining silent, just happens to move his or her vocal tract muscles when thinking “yes” but not when thinking “no”. This would mean that the EEG-based differences between thinking “yes” and “no” depend on the degree of motor response. Indeed, there are many ways in which a straightforward interpretation of classifiability can prove false, and a major aim of our work is to conduct experiments that pin down more precisely what brain networks are contributing to EEG classification results.

Figure: Theta-band activity depends on the rhythm of covert speech production. The leftmost panel plots frequency spectra of EEG brain waves from single scalp locations; theta-band activity (in the 4-7 Hz range, see inset) is weak when covertly speaking the syllable /ba/ at a slow, steady rate. The middle panel shows that strong theta-band activity is elicited when thinking /ba/ in a rhythm involving three /ba/ productions for every one at left. The strong difference in theta-band activity is localized to scalp positions atop central frontal cortex.

Our pilot studies have found that EEG provides information that helps to classify covertly-spoken sentences, words, syllables, meters and rhythms.

A more complex type of experiment involves training. Suppose that thinking different words leads to small differences in EEG signals. One can try to help a person generate stronger, more informative EEG signals by providing feedback during an experiment. The person uses feedback information concerning the strength of EEG-based information to produce stronger signals. Such experiments typically follow earlier classification experiments. The reason is that classification experiments provide needed information on what aspects of the EEG recordings provide information that lets one differentiate what is spoken covertly.

The strongest published results for classification using EEG concern speech that one listens to rather than speech that one produces covertly. In the late 1990s, a group at Stanford attempted to classify EEG signals recorded while listeners heard small sets of words or sentences, with mixed success (Suppes et al., 1997, 1998). MEG has also been used to classify heard speech. Numminen and Curio (1999) used MEG to study auditory cortical responses to speech and showed activity related to speech monitoring in both overt and covert production (see also Houde et al., 2002). Ahissar and colleagues (2001) recorded MEG responses to natural spoken sentences presented at different time-compression ratios. Their principal component analysis of the MEG data showed that one component correlated well with the speech envelope and with its degradation consequent to compression.

Luo and Poeppel (2007) showed that, in single trials recorded when listening to natural spoken sentences, the phase of the theta band (3-8Hz) recorded from auditory cortex plays a critical role in speech representation. MEG theta band responses relate to the syllabic structure of sentence materials, which in turn affects response temporal envelope.

Figure: Classification of heard sentences using MEG (after Luo and Poeppel, 2007). (A) Scalp distribution of the theta-band phase representation of syllabic structure. (B) Perfect classification of a small set of sentences based on MEG theta-band responses (central diagonal). The strongest competing sentences in the classification are very similar to the correct sentence.
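
One rough way to reproduce the spirit of the Luo and Poeppel analysis is to extract the theta-band phase of each trial (here with a band-pass filter and the Hilbert transform) and assign the trial to whichever sentence’s template phase pattern it matches best. Everything in the sketch below (the template arrays, the test trial, the cosine-based similarity) is a hypothetical stand-in, not their published method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase(x, fs, band=(3.0, 8.0)):
    """Instantaneous theta-band phase of a 1-D signal."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def classify_by_phase(trial, templates, fs):
    """Pick the sentence whose template phase pattern best matches the trial,
    using the mean cosine of the phase difference as a circular similarity."""
    phase = theta_phase(trial, fs)
    scores = {name: np.cos(phase - tmpl).mean() for name, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Hypothetical demo: two "sentences" with different theta rhythms.
fs = 200
t = np.arange(3 * fs) / fs
rng = np.random.default_rng(3)
s1 = np.sin(2 * np.pi * 4 * t)            # 4 Hz syllabic rhythm
s2 = np.sin(2 * np.pi * 6 * t + 1.0)      # 6 Hz rhythm, different phase
templates = {"sentence_1": theta_phase(s1, fs), "sentence_2": theta_phase(s2, fs)}
trial = s1 + 0.5 * rng.standard_normal(len(t))
print(classify_by_phase(trial, templates, fs))   # -> 'sentence_1'
```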

A neurolinguistic framework for speech production is needed to understand and pursue such results. Studies of cortical speech mechanisms suggest that, within temporal and frontal lobe cortices, there is a direct speech production pipeline that ranges from earlier, conceptually-driven word selection through later selection of corresponding articulatory motor commands (Hickok and Poeppel, 2004, 2007; Indefrey and Levelt, 2004).

Figure: Brain areas involved in speech processing (after Hickok and Poeppel, 2004). A ventral processing stream (the what pathway) maps between sound and meaning. Activity in the superior temporal sulcus and the posterior inferior temporal lobe (pITL) interfaces sound-based speech representations in the superior temporal gyrus with widely-distributed conceptual representations. A dorsal processing stream (the where, how or do pathway) involves activity in cortex at the junction of the parietal and temporal lobes in the sylvian fissure (Spt), which projects to frontal cortical areas that generate articulatory codes: posterior inferior frontal and dorsal premotor sites (pIF/dPM).

Much evidence concerning the speech production pipeline comes from studies using EEG techniques, which have the temporal resolution needed to discern staged processing. These studies use event-related potentials such as the N200, a go/no-go signal whose magnitude reflects the neural activity required for response inhibition, to measure the times at which various stimuli interfere with speech production (e.g., Schiller et al., 2003). MEG has also contributed; Salmelin and colleagues (1994) used MEG to trace the time-course of cortical activation during both overt and covert picture naming. Results suggest that syllables are basic representations in cortical speech production, and that they are generated serially from representations of syntactically-marked words and used to retrieve gestural information that drives motor articulation: concepts to words to syllables to phonemes to motor articulation. The Levelt model of speech production relates this linguistic pipeline to cortical activity (Indefrey and Levelt, 2004) localized in fMRI and PET studies using a variety of overt and covert speech production tasks. The model localizes (1) lexical selection from conceptual processes to the left medial temporal gyrus and environs; (2) retrieval of a word’s phonological code some 70 msec later to (left) Wernicke’s area; (3) sequential syllabification of the word some 100 msec later to (left) Broca’s area; and (4) phonetic encoding of the syllables some 150 msec later to left inferior lateral frontal cortex and to the right supplementary motor area. Can EEG be used to make sense of the rumbling of this pipeline?
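
To keep the timing claims of this pipeline straight when searching EEG for its stages, it can help to write the stages down explicitly. The snippet below merely encodes, as data, the approximate stage-to-stage delays and localizations summarized in the paragraph above; the representation itself is illustrative only.

```python
# The Levelt speech-production stages as summarized in the text above: each entry
# gives the approximate delay after the previous stage (Indefrey and Levelt, 2004)
# and the localization named in the text. Illustration only.
LEVELT_PIPELINE = [
    ("lexical selection", 0,   "left medial temporal gyrus and environs"),
    ("phonological code", 70,  "left Wernicke's area"),
    ("syllabification",   100, "left Broca's area"),
    ("phonetic encoding", 150, "left inferior lateral frontal cortex / right SMA"),
]

def cumulative_latencies(pipeline=LEVELT_PIPELINE):
    """Cumulative latency (ms) of each stage relative to the first stage."""
    out, total = {}, 0
    for stage, delay_ms, site in pipeline:
        total += delay_ms
        out[stage] = (total, site)
    return out

for stage, (latency, site) in cumulative_latencies().items():
    print(f"{stage:20s} ~{latency:3d} ms  {site}")
```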

Our intent is to learn what linguistic information can be extracted from EEG recordings of this direct speech production pipeline, when one thinks to oneself. We are particularly interested in the involvement of brain networks which help with speech motor articulation and with brain networks involved in generating the auditory images which accompany covert speech: the words heard in one’s head while thinking. Our expectation is that decoding the EEG recordings of a covert speech stream successfully will rely on context in a way similar to that found when performing standard automatic speech recognition (ASR). A particular element of speech that is signaled through a spoken speech waveform, be it a phoneme, syllable or word, is more reliably identified when taken in the context of preceding speech elements. We will work to adapt standard ASR to the decoding of EEG signals concerning covert speech streams.
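
The role of context can be illustrated with a toy decoder: per-frame likelihoods from a (hypothetical) EEG classifier are combined with a simple bigram prior over words, and the most probable word sequence is found by Viterbi dynamic programming, much as in standard ASR. The vocabulary, likelihoods and bigram table below are all invented for illustration.

```python
import numpy as np

def viterbi(loglik, log_bigram):
    """Most probable word sequence given per-frame log-likelihoods (T x V)
    and a V x V log bigram transition matrix."""
    T, V = loglik.shape
    score = loglik[0].copy()
    back = np.zeros((T, V), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_bigram + loglik[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: a vocabulary of three covertly spoken words.
vocab = ["go", "left", "stop"]
# Per-frame log-likelihoods from a hypothetical EEG classifier: the middle
# frame is ambiguous between "left" and "stop".
loglik = np.log(np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.45, 0.45],
                          [0.1, 0.1, 0.8]]))
# Bigram prior: "go" is usually followed by "left", rarely by "stop".
log_bigram = np.log(np.array([[0.2, 0.7, 0.1],
                              [0.3, 0.2, 0.5],
                              [0.4, 0.3, 0.3]]))
best = viterbi(loglik, log_bigram)
print([vocab[i] for i in best])
# -> ['go', 'left', 'stop']: the bigram prior resolves the ambiguous middle frame as "left"
```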

Intended Direction

The project aims also to discern from EEG recordings an intended direction that may be signaled by a thinker to select a target of communication. Two components are of special interest: EEG signals concerning overt orienting movements, like those of the eyes, and signals concerning shifts of attention. Saccadic eye movements are overt indicators of attentional orientation that depend on a generator network spanning the cortical frontal eye fields and subcortical neurons in the substantia nigra, superior colliculus and brainstem (Boucher et al., 2007). Shifts in attention may occur covertly and are thought to result from the activity of attentional circuits in the frontal and parietal lobes (Corbetta and Shulman, 2002). These are thought to feed back onto visual areas in occipital cortex (Praamstra et al., 2005); such feedback is thought to facilitate sensory processing from the intended direction.

Shift in gaze is closely related to shift of attention. A premotor theory of attention (Rizzolatti et al., 1994) suggests that the allocation of spatial attention involves planning for, but not executing, a saccade. Yet it is possible to shift attention without shifting gaze (Hoffman and Subramaniam, 1995), and some evidence suggests that spatial attention shifts may occur in the absence of saccade preparation (Juan et al., 2004).

Three event-related potentials measured using EEG result from a shift of attention. They are the EDAN, an early posterior negativity in the hemisphere contralateral to the attended hemifield, thought to be related to the spatial information provided by a cue in a covert orienting task (Harter et al., 1989); the LDAP, a later contralateral positivity thought to be related to facilitation of sensory areas (Harter et al., 1989); and the ADAN, an enhanced negativity at frontal contralateral electrodes, likely linked to activation of frontal lobe neurons involved in the control of spatial shifts (Corbetta et al., 1993; Nobre et al., 2000). These ERPs are supramodal, in that they occur independently of the sensory modality used to modulate attention (Eimer et al., 2002).

Alpha-band activity in frontal, parietal and occipital cortex, recorded by EEG in the 8-14 Hz range, provides further information concerning visuospatial attention that may very possibly be recovered reliably from single trials. Alpha-band amplitudes are suppressed in parieto-occipital cortex contralateral to the covertly-attended hemifield and enhanced in cortex contralateral to the to-be-ignored hemifield (Worden et al., 2000; Sauseng et al., 2005; Thut et al., 2006; Capotosto et al., 2008). Synchronization in the form of alpha-band phase-coupling increases between frontal and parieto-occipital alpha activity in the hemisphere contralateral to the attended region, which suggests that the posterior modulation of alpha activity in contralateral posterior parieto-occipital cortex is controlled by prefrontal regions (Sauseng et al., 2005).
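
A common single-trial summary of this pattern is a lateralization index contrasting alpha power over left- and right-hemisphere parieto-occipital electrodes; its sign then indicates the attended hemifield. The sketch below assumes hypothetical channel groupings (`left_chans`, `right_chans`) and synthetic data, and the zero threshold is a simplification.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(epoch, fs):
    """Mean 8-14 Hz power per channel for one epoch (channels x samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(epoch.shape[-1], fs), axis=-1)
    band = (freqs >= 8) & (freqs <= 14)
    return psd[:, band].mean(axis=-1)

def attended_hemifield(epoch, fs, left_chans, right_chans):
    """Classify the attended hemifield from an alpha lateralization index:
    attending left suppresses right-hemisphere alpha, and vice versa."""
    power = alpha_band_power(epoch, fs)
    left_hemi = power[left_chans].mean()      # parieto-occipital, left hemisphere
    right_hemi = power[right_chans].mean()    # parieto-occipital, right hemisphere
    lat_index = (right_hemi - left_hemi) / (right_hemi + left_hemi)
    return "left" if lat_index < 0 else "right"

# Hypothetical demo: 4 channels (0-1 left hemisphere, 2-3 right). Attention to the
# left hemifield leaves alpha strong on the left channels, suppressed on the right.
fs, n_samples = 250, 500
t = np.arange(n_samples) / fs
rng = np.random.default_rng(4)
epoch = 0.5 * rng.standard_normal((4, n_samples))
epoch[:2] += 10 * np.sin(2 * np.pi * 10 * t)
print(attended_hemifield(epoch, fs, left_chans=[0, 1], right_chans=[2, 3]))  # -> 'left'
```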

Spatial shifts of attention that depend on auditory stimulation also give rise to event-related potentials (e.g., Teder-Salejarvi et al., 1999), modulation of alpha-band activity, and modulation of steady-state evoked potentials, which suggest that such shifts are spatial, not merely visual. Finally, shifts in attention are thought to occur not only among spatial locations but among object features like color (Muller et al., 2006) and among objects themselves.

We hypothesize that EEG recordings related to spatial attention in single trials can be used in four basic ways to provide information concerning intended direction.

1. One can discern the hemifield to which attention is lateralized through analysis of attentional modulation of alpha-band activity and of steady-state visual and auditory evoked potentials;

2. One can discern which of several frequency-tagged objects attention is directed towards through analysis of attentional modulation of steady-state visual and auditory evoked potentials (a frequency-tagging sketch follows this list);

3. One can estimate a continuous-valued intended direction by considering information concerning shift in gaze captured through EEG and through eye-tracking, in addition to attentional modulation of alpha-band activity and of steady-state evoked potentials; and

4. By using gaze direction and steady-state evoked potential information together, one can most reliably discern the intended target of communication, as these provide information concerning both attended direction and attended object.
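
As a sketch of the frequency-tagging idea in point 2 above, the code below assumes two objects flickering at known (hypothetical) tag frequencies and picks the attended one as the tag whose narrow-band steady-state power is largest.

```python
import numpy as np
from scipy.signal import welch

def attended_tag(eeg, fs, tag_freqs, bw=0.5):
    """Return the tag frequency whose narrow-band SSVEP power is largest."""
    freqs, psd = welch(eeg, fs=fs, nperseg=len(eeg))
    powers = {}
    for f in tag_freqs:
        mask = (freqs >= f - bw) & (freqs <= f + bw)
        powers[f] = psd[mask].mean()
    return max(powers, key=powers.get)

# Hypothetical demo: objects tagged at 12 Hz and 15 Hz; attention to the 15 Hz
# object enhances its steady-state response.
fs = 250
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(5)
eeg = (0.5 * np.sin(2 * np.pi * 12 * t)      # unattended tag
       + 1.5 * np.sin(2 * np.pi * 15 * t)    # attended tag (enhanced)
       + rng.standard_normal(len(t)))
print(attended_tag(eeg, fs, tag_freqs=(12.0, 15.0)))  # -> 15.0
```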

Potential Applications

The funded research is basic in nature. A functioning brain-computer interface for communicating thought and the intended recipient like that described above is years away. Yet one can identify several areas of future application. These include the development of a silent communications system for dispersed ground forces, of a speech-based means of communication for locked-in individuals, and of commercial communications devices based on brain-wave decoding.

As a key player in the Intelligence-Industrial Complex that spawned artificial telepathy, Booz Allen Hamilton deserves special attention.

The central importance of BAH becomes clear when one considers some of the stunning information unearthed by reporter Tim Shorrock in his new book Spies for Hire: The Secret World of Intelligence Outsourcing (Simon & Schuster: New York, 2008).

In Chapter 2, titled “Booz Allen Hamilton and the Shadow IC,” Shorrock informs us that the spy business is booming, and BAH enjoys top ranking among those contractors who serve the “intelligence community.”

In 2002, Booz Allen had more than 1000 former intelligence officers on its staff, and its government contracts rose dramatically after 9/11,

“from $626,000 in 2000 to $1.6 billion in 2006. Most of the latter figure, $932 million, was with the Department of Defense, where Booz Allen’s major customers included the NSA, the Army, the Air Force, the Defense Logistics Agency, and the National Guard. In 2006, it was one of seven firms awarded a ten-year contract to bid on up to $20 billion worth of work in command, control, communications, computers, intelligence, surveillance and reconnaissance — a mouthful of a term usually referred to as C4ISR — for the Army’s Communications and Electronics Command, which is based in Fort Monmouth, New Jersey.”

The term C4ISR is a perfect descriptor for Artificial Telepathy — a powerful fusion of signal processing technologies that allows technicians to remotely gather and collect human intelligence (HUMINT) from the brain signals (SIGINT) of other human beings.

As with C4ISR, Artificial Telepathy automates “signals collection” (eavesdropping) and “intelligence analysis” (figuring out what people intend to do) by utilizing satellites and computers, and the overall goal is military command and control.

Indeed, one might define Artificial Telepathy as a subset of C4ISR with a special focus on neurology, psychology and mind control. Artificial Telepathy is an exotic form of C4ISR that allows warriors to communicate nonvocally with soldiers in the field, enables spies and intelligence agents to perform reconnaissance and surveillance nonlocally by means of “remote viewing,” and allows military officers to command and control the behavior of human minds at a distance, with the artificial aid of carefully networked satellite and computer technology.

Booz Allen Hamilton certainly has close ties to the contractors who worked on the Pentagon’s “Stargate” program for psychic spying in the 1990s, and it took a lead role in development of the NSA’s “Total Information Awareness” projects, mentioned in earlier posts.

BAH must certainly be on the short list of firms capable of designing and launching a system of space-based, mind-invasive weaponry. It has all the people, pull and know-how needed to put an “electronic concentration camp system” in place.

This collection of biographical profiles below is largely based on the Wikipedia article at this link: http://en.wikipedia.org/wiki/Booz_Allen_Hamilton

Photographs have been added (where possible) and additional materials illustrate the links between each person and artificial telepathy. In other words, any significant article that turns up from a Google search on the Booz Allen officer’s name + “Artificial Telepathy” or “psychotronics” or “mind control” is listed and hyperlinked beneath the person’s profile.

Notable colleagues and alumni

Jonathan S. Bush – CEO and founder of AthenaHealth, a “healthcare technology” company. Mr. Bush is the first cousin of President George W. Bush and the nephew of President George H. W. Bush. Jonathan’s father, also named Jonathan Bush, is the brother of George H. W. Bush. With regard to CIA mind control programs, it must be noted that George H. W. Bush was briefly the director of the CIA in 1975, a year when the Church Committee first exposed the role of the CIA in mind control and assassination programs.
http://www.the7thfire.com/bush15.htm
Jonathan himself has no known connection to any such program. He was a founding member of Booz-Allen’s Managed Care strategy group, and presently works as a consultant to Agilence Health Advisors.
http://www.agilenceadvisors.com/team_bios.html
http://en.wikipedia.org/wiki/Jonathan_S._Bush
http://www.athenahealth.com/

Welcome to the PRESENCCIA project (EU)

This Integrated Project will undertake a Research Programme that has as its major goal the delivery of presence in wide area distributed mixed reality environments. The environment will include a physical installation that people can visit both physically and virtually. The installation will be the embodiment of an artificial intelligent entity that understands and learns from its interaction with people. People who inhabit the installation will at any one time be physically there, virtually there but remote, or entirely virtual beings with their own goals and capabilities for interacting with one another and with embodiments of real people. Specific subclasses of the installation will be used for the construction of a number of application scenarios, such as a persistent virtual community that embodies the project itself. The core methodology will be to achieve this through the identification, understanding and exploitation of cerebral mechanisms for presence in conjunction with advances in the underlying technology for mixed reality display and interaction, with special attention to the interaction between people, and also between people and virtual people. Such cerebral mechanisms will be the basis for a core aspect of the IP which is the exploitation of brain-computer interfaces.

Processes within the environments adapt and correlate with the behaviour and state of people, and in addition people are able to effect changes within the environment through thought as well as through motor actions.

In one of the other posts, we talked about how the internet will drive a brain interface. That and medical needs will drive the research. Right now very experienced researchers are working on ways to do this, but few of them have the money they need to do the work. All the money is going into new tennis rackets (an example that is a bit dated now) or other ways to make quick money. Business looks to the next quarter, the next year. To do this research you have to look five and ten years down the road. And nobody invests in that blue-sky research.
I’m probably a year or more out of date on what is actually being done. I hope there is a lot of fundamental research that we can use, but if they patent it, that just locks it up. I hope the nanofactory animation I did contributed something to the mix that can’t be owned by some corporation. Probably not, it was too theoretical and not based on real research.

We will definitely have protesters. This is the most radical technology the human race has ever invented. NOTHING comes close. Everything will be labeled as evil by someone. Let’s just hope enough people want the results. The stem cell controversy is the exact type of problem we will see. Moral arguments will affect how the laws are written. If the government is conservative, it will be inhibited and controlled, if the government is more liberal, it will be supported. Everybody votes their values.
John

yes, really frightening stuff. I can’t believe no trials at least, since they say it’s the same drug. A radio transmitter really makes it a LOT different. I don’t want a radio transmitter in my body. It seems that we are to be watched, tracked and made sure that we do what someone else says we must. There was a hearing this Spring in the Senate on Aging discussing just how great this all would be!! They are trying to have broadband over the electric wires, pills that track, a medical ‘force’ that makes sure you take your medications, etc. This is just one of the first times I saw it mainstream. We really do have to stop this!

Sounds incredible! Could prove that ‘consciousness’ (and all the spiritual beliefs we associate with this subjective experience) is just a product of our highly evolved brains. I can’t begin to imagine the influence it might have on neuroscience as well as psychological disorders. What fascinating potential….

… Being a professor of War Studies, I found, during my military research, the mind-control technique applied to war prisoners. It is an electromagnetic mind-control technique which can take full control of a person’s body and mind permanently. It uses modulated microwaves to produce audible voices in the person’s head. It takes the form of subliminal hypnotic commands, and the victim can be hypnotically programmed for years without knowing.

Thoughts are implanted in the victim’s mind without letting him know. In microwave hearing, nobody can hear the voices except the targeted individual. The sound reverberates in the target’s ear monotonously. In a solitary cell the high pitched sound gets amplified. Slowly it stirs the unconscious layer of the mind and deeply affects the nerves. …

Do you think that humans could advance robot technology so that one day, they could think for themselves without our control, reproduce without human assistance, and get energy from the environment and food sources rather than relying on purely electrical human-produced power?

I have a cousin studying robotics at the South China Institute of Technology and he says robots could be considered “living”, so it got me thinking.
Garter

It’s here. I am a victim from Spain. I have been under persecution for 2 years. They control all television channels to harass me. Hundreds of people are involved, like a mafia.
If I am watching a TV contest and think of a ridiculous response, they repeat it, even a number or a word.
My health has been damaged, and now my heart is racing, but I am listening to Zen meditation music.
I read a book by David Bohm about the relevance of signification, and they couldn’t penetrate my mind while I was concentrated, letting their words from the TV cross my soul without affecting me.
I was scared, but I decided to talk because no one can live this way.
I don’t hear voices. They suggest to me that it is my glasses, but the other day I had something like an internal, focused explosion in my head while I was driving, with an unpleasant pain.
This world is a bigger lie than you can imagine. I have decided to open the true road; you are free to follow it or to close your eyes.
If you want to know more, ask with a post… if I am still alive.

Amendment to my last post, based on visual information.
Here is the reason for being targeted by these techniques.
When I did my compulsory service in the Iranian Army Air Force, I revealed that the revolution in Iran in the 1970s was carried out by some masterminds in the UK intelligence service, MI6. They created the Ayatollah system (the leaders of Iran and its satellite countries in the region) to control the policy of the region.
They are indirectly ordering the government of Iran to purchase Eastern powers’ military equipment, to create a synthetic enemy in the region and to control the policy of the region. Their agents in Iran were trying to prevent my travel to North America. Since I came to Canada several years ago, agents of the Iranian intelligence service, working with one wing of MI6 and some Canadian intelligence agents interested in the Iranian Ayatollah system, have been trying to brainwash me by inducing voice messages (to read what I am thinking in my mind) and inducing synthetic dreams when I am sleeping, to put me in a state where I forget the secret.
They also watch through my eyes when I am awake, by RV techniques, during the day.
They were planning to return me to Iran to plan my physical death. However, now they are trying to create a brain injury or cancer with electromagnetic waves, by inducing scary dreams and reading my mind’s voice while I am thinking. Their agents in Canada tried to prevent me from pursuing my PhD studies or finding a job by blocking my emails. They made some dummies using biomimetic technology, matched my central nervous system with them, and can induce touch sensations when I am sleeping using those dummies.
The technology being used in this operation is a combination of UK, Japanese and Canadian technology. Their agents are located in Canada, Europe and Iran. The data transfer is done by collecting signals from brain nerve cells using the cellular phone wireless network in the city, transferring them to a special center in Canada, sending them on to centers in Europe and Iran, and vice versa back to my brain.

The main goal of the operation, which was designed by one of the MI6 wings, was to withdraw US forces from the region and keep power in the hands of the Ayatollahs, who were created and are controlled by some MI6 masterminds.