“Magic” mushrooms seem to have passed their genes for mind-altering substances around among distant species as a survival mechanism: by making fungus-eating insects “trip,” the mushrooms leave the bugs less hungry — and less likely to feast on them.

That’s the upshot of a paper published Feb. 27 in the journal Evolution Letters by a team of biologists at The Ohio State University and the University of Tennessee.

The researchers studied a group of mushrooms that all produce psilocybin — the chemical agent that causes altered states of consciousness in human beings — but aren’t closely related. The scientists found that the clusters of genes that caused the ‘shrooms to fill themselves with psilocybin were very similar to one another, more similar even than clusters of genes found in closely related species of mushrooms.

That’s a sign, the researchers wrote, that the genes weren’t inherited from a common ancestor, but instead were passed directly between distant species in a phenomenon known as “horizontal gene transfer” or HGT.

HGT isn’t really one process, as the biologist Alita Burmeister explained in the journal Evolution, Medicine and Public Health in 2015. Instead, it’s the term for a group of more or less well-understood processes — like viruses picking up genes from one species and dropping them in another — that can cause groups of genes to jump between species.

However, HGT is believed to be pretty uncommon in complex, mushroom-forming fungi, turning up much more often in single-celled organisms.

When a horizontally transferred gene takes hold and spreads after landing in a new species, the paper’s authors wrote, scientists believe that’s a sign that the gene offered a solution to some crisis the organism’s old genetic code couldn’t solve on its own.

The researchers suggested — but didn’t claim to prove — that the crisis in this case was droves of insects feasting on the defenseless mushrooms. Most of the species the scientists studied grew on animal dung and rotting wood — insect-rich environments (and environments full of opportunities to perform HGT). Psilocybin, the scientists wrote, might suppress insects’ appetites or otherwise induce the bugs to stop munching quite so much ’shroom.

Quantum physics has some spooky, counterintuitive effects, but it could also be essential to how actual intuition works, at least where artificial intelligence is concerned.

In a new study, researcher Vedran Dunjko and co-authors applied a quantum analysis to a field within artificial intelligence called reinforcement learning, which deals with how to program a machine to make appropriate choices to maximize a cumulative reward. The field is surprisingly complex and must take into account everything from game theory to information theory.
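
For readers unfamiliar with the term, the toy example below sketches what classical reinforcement learning looks like in practice: an agent tries two options over and over, gets rewarded probabilistically, and gradually learns which choice maximizes its cumulative reward. The payout probabilities and learning rate are invented for illustration, and the sketch has nothing to do with the quantum speedups reported in the study.

```python
import random

# Toy illustration of classical reinforcement learning (not the quantum
# version studied in the paper): an agent repeatedly picks one of two
# "slot machines" and learns by trial and error which one pays off more,
# maximizing its cumulative reward.

true_payout = {"A": 0.3, "B": 0.7}   # hypothetical reward probabilities
value = {"A": 0.0, "B": 0.0}         # the agent's running estimates
epsilon, learning_rate = 0.1, 0.1

total_reward = 0
for step in range(10_000):
    # Explore occasionally; otherwise exploit the best current estimate.
    if random.random() < epsilon:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)
    reward = 1 if random.random() < true_payout[action] else 0
    # Nudge the estimate for the chosen action toward the observed reward.
    value[action] += learning_rate * (reward - value[action])
    total_reward += reward

print(value, total_reward)  # the agent should come to prefer machine "B"
```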

Dunjko and his team found that quantum effects, when applied to reinforcement learning in artificial intelligence systems, could provide quadratic improvements in learning efficiency, reports Phys.org. Exponential improvements might even be possible over short-term performance tasks. The study was published in the journal Physical Review Letters.

“This is, to our knowledge, the first work which shows that quantum improvements are possible in more general, interactive learning tasks,” explained Dunjko. “Thus, it opens up a new frontier of research in quantum machine learning.”

One of the key quantum effects with regard to learning is quantum superposition, which potentially allows a machine to perform many computational steps simultaneously. Such a system has vastly improved processing power, allowing it to take more variables into account when making decisions.

The research is tantalizing, in part because it mirrors some theories about how biological brains might produce higher cognitive states, possibly even being related to consciousness. For instance, some scientists have proposed the idea that our brains pull off their complex calculations by making use of quantum computation.

Could quantum effects unlock consciousness in our machines? Quantum physics isn’t likely to produce HAL from “2001: A Space Odyssey” right away; the most immediate improvements in artificial intelligence will likely come in complex fields such as climate modeling or automated cars. But eventually, who knows?

You probably won’t want to be taking a joyride in an automated vehicle the moment it becomes conscious, if HAL is an example of what to expect.

“While the initial results are very encouraging, we have only begun to investigate the potential of quantum machine learning,” said Dunjko. “We plan on furthering our understanding of how quantum effects can aid in aspects of machine learning in an increasingly more general learning setting. One of the open questions we are interested in is whether quantum effects can play an instrumental role in the design of true artificial intelligence.”

What we think we know about our brains is nothing compared to what we don’t know. This fact is brought into focus by the medical mystery of a 44-year-old French father of two who found out one day that most of his brain was missing. Instead, his skull is mostly filled with fluid, with almost no brain tissue left. He has a lifelong condition known as hydrocephalus, commonly called “water on the brain” or “water head.” It happens when too much cerebrospinal fluid builds up, putting pressure on the brain and abnormally enlarging its cavities.

As Axel Cleeremans, a cognitive psychologist at the Université Libre de Bruxelles who has lectured about this case, told CBC:

“He was living a normal life. He has a family. He works. His IQ was tested at the time of his complaint. This came out to be 84, which is slightly below the normal range … So, this person is not bright — but perfectly, socially apt”.

The complaint Cleeremans refers to is the original reason the man sought help – he had leg pain. Imagine that – you go to your doctor with a leg cramp and get told that you’re living without most of your brain.

The man continues to live a normal life, a family man with a wife and kids, working as a civil servant. All this while three of his main brain cavities are filled with nothing but fluid, and his brainstem and cerebellum are squeezed into a small space they share with a cyst.

What can we learn from this rare case? As Cleeremans points out:

“One of the lessons is that plasticity is probably more pervasive than we thought it was… It is truly incredible that the brain can continue to function, more or less, within the normal range — with probably many fewer neurons than in a typical brain. Second lesson perhaps, if you’re interested in consciousness — that is the manner in which the biological activity of the brain produces awareness… One idea that I’m defending is the idea that awareness depends on the brain’s ability to learn.”

The Frenchman’s story really challenges the idea that consciousness arises in just one part of the brain. Current theories hold that a region called the thalamus is responsible for our self-awareness. A man living a normal life with most of his brain missing does not fit neatly into such hypotheses.

Imagine scanning your Grandma’s brain in sufficient detail to build a mental duplicate. When she passes away, the duplicate is turned on and lives in a simulated video-game universe, a digital Elysium complete with Bingo, TV soaps, and knitting needles to keep the simulacrum happy. You could talk to her by phone just like always. She could join Christmas dinner by Skype. E-Granny would think of herself as the same person that she always was, with the same memories and personality—the same consciousness—transferred to a well regulated nursing home and able to offer her wisdom to her offspring forever after.

And why stop with Granny? You could have the same afterlife for yourself in any simulated environment you like. But even if that kind of technology is possible, and even if that digital entity thought of itself as existing in continuity with your previous self, would you really be the same person?

Is it even technically possible to duplicate yourself in a computer program? The short answer is: probably, but not for a while.

Let’s examine the question carefully by considering how information is processed in the brain, and how it might be translated to a computer.

The first person to grasp the information-processing fundamentals of the brain was the great Spanish neuroscientist Santiago Ramón y Cajal, who won the 1906 Nobel Prize in Physiology or Medicine. Before Cajal, the brain was thought to be made of microscopic strands connected in a continuous net or ‘reticulum.’ According to that theory, the brain was different from every other biological thing because it wasn’t made of separate cells. Cajal used new methods of staining brain samples to discover that the brain did have separate cells, which he called neurons. The neurons had long thin strands mixing together like spaghetti—dendrites and axons that presumably carried signals. But when he traced the strands carefully, he realized that one neuron did not grade into another. Instead, neurons contacted each other through microscopic gaps—synapses.

Cajal guessed that the synapses must regulate the flow of signals from neuron to neuron. He developed the first vision of the brain as a device that processes information, channeling signals and transforming inputs into outputs. That realization, the so-called neuron doctrine, is the foundational insight of neuroscience. The last hundred years have been dedicated more or less to working out the implications of the neuron doctrine.

It’s now possible to simulate networks of neurons on a microchip and the simulations have extraordinary computing capabilities. The principle of a neural network is that it gains complexity by combining many simple elements. One neuron takes in signals from many other neurons. Each incoming signal passes over a synapse that either excites the receiving neuron or inhibits it. The neuron’s job is to sum up the many thousands of yes and no votes that it receives every instant and compute a simple decision. If the yes votes prevail, it triggers its own signal to send on to yet other neurons. If the no votes prevail, it remains silent. That elemental computation, as trivial as it sounds, can result in organized intelligence when compounded over enough neurons connected in enough complexity.
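
Here is a minimal sketch of that elemental computation, with invented inputs and synaptic weights: one model neuron sums the weighted “votes” arriving from other neurons and fires only if the total crosses a threshold.

```python
# A single model neuron: sum the incoming votes, each scaled by its synapse,
# and fire only if the excitatory votes outweigh the inhibitory ones.
# The inputs and weights below are made up purely for illustration.

def neuron_fires(inputs, weights, threshold=0.0):
    # Positive weights act like excitatory synapses, negative like inhibitory.
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return total > threshold

incoming = [1, 1, 0, 1]            # signals arriving from four other neurons
synapses = [0.8, -0.5, 0.3, 0.4]   # synaptic strengths (sign = excite/inhibit)
print(neuron_fires(incoming, synapses))  # True: the "yes" votes prevail
```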

The trick is to get the right pattern of synaptic connections between neurons. Artificial neural networks are programmed to adjust their synapses through experience. You give the network a computing task and let it try over and over. Every time it gets closer to a good performance, you give it a reward signal or an error signal that updates its synapses. Based on a few simple learning rules, each synapse changes gradually in strength. Over time, the network shapes up until it can do the task. That deep learning, as it’s sometimes called, can result in machines that develop spooky, human-like abilities such as face recognition and voice recognition. This technology is already all around us in Siri and in Google.
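
Here is a toy version of that training loop, far simpler than the deep networks behind Siri or Google: a single artificial neuron learns a logical AND by nudging its synaptic weights a little after every error signal. The task, starting weights, and learning rate are made up for illustration.

```python
import random

# A toy version of the training loop described above: after each answer,
# an error signal nudges every synapse slightly, and over many trials the
# weights settle on a pattern that solves the task (here, a logical AND).

examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights = [random.uniform(-1, 1) for _ in range(2)]
bias, learning_rate = 0.0, 0.1

for _ in range(1000):
    inputs, target = random.choice(examples)
    output = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
    error = target - output            # the "reward or error signal"
    # Each synapse changes gradually in strength, as the text describes.
    weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    bias += learning_rate * error

print(weights, bias)  # after training, only the input (1, 1) makes the neuron fire
```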

But can the technology be scaled up to preserve someone’s consciousness on a computer? The human brain has about a hundred billion neurons. The connectional complexity is staggering. By some estimates, the amount of information in the human brain compares to the entire content of the internet. It’s only a matter of time, however, and not very much at that, before computer scientists can simulate a hundred billion neurons. Many startups and organizations, such as the Human Brain Project in Europe, are working full-tilt toward that goal. The advent of quantum computing will speed up the process considerably. But even when we reach that threshold where we are able to create a network of a hundred billion artificial neurons, how do we copy your special pattern of connectivity?

No existing scanner can measure the pattern of connectivity among your neurons, or connectome, as it’s called. MRI machines scan at about a millimeter resolution, whereas synapses are only a few microns across. We could kill you and cut up your brain into microscopically thin sections. Then we could try to trace the spaghetti tangle of dendrites, axons, and their synapses. But even that less-than-enticing technology is not yet scalable. Scientists like Sebastian Seung have plotted the connectome in a small piece of a mouse brain, but we are decades away, at least, from technology that could capture the connectome of the human brain.

Assuming we are one day able to scan your brain and extract your complete connectome, we’ll hit the next hurdle. In an artificial neural network, all the neurons are identical. They vary only in the strength of their synaptic interconnections. That regularity is a convenient engineering approach to building a machine. In the real brain, however, every neuron is different. To give a simple example, some neurons have thick, insulated cables that send information at a fast rate. You find these neurons in parts of the brain where timing is critical. Other neurons sprout thinner cables and transmit signals at a slower rate. Some neurons don’t even fire off signals—they work by a subtler, sub-threshold change in electrical activity. All of these neurons have different temporal dynamics.

The brain also uses hundreds of different kinds of synapses. As I noted above, a synapse is a microscopic gap between neurons. When neuron A is active, the electrical signal triggers a spray of chemicals—neurotransmitters—which cross the synapse and are picked up by chemical receptors on neuron B. Different synapses use different neurotransmitters, which have wildly different effects on the receiving neuron, and are re-absorbed after use at different rates. These subtleties matter. The smallest change to the system can have profound consequences. For example, Prozac works on people’s moods because it subtly adjusts the way particular neurotransmitters are reabsorbed after being released into synapses.

Although Cajal didn’t realize it, some neurons actually do connect directly, membrane to membrane, without a synaptic space between. These connections, called gap junctions, work more quickly than the regular kind and seem to be important in synchronizing the activity across many neurons.

Other neurons act like a gland. Instead of sending a precise signal to specific target neurons, they release a chemical soup that spreads and affects a larger area of the brain over a longer time.

I could go on with the biological complexity. These are just a few examples.

A student of artificial intelligence might argue that these complexities don’t matter. You can build an intelligent machine with simpler, more standard elements, ignoring the riot of biological complexity. And that is probably true. But there is a difference between building artificial intelligence and recreating a specific person’s mind.

If you want a copy of your brain, you will need to copy its quirks and complexities, which define the specific way you think. A tiny maladjustment in any of these details can result in epilepsy, hallucinations, delusions, depression, anxiety, or just plain unconsciousness. The connectome by itself is not enough. If your scan could determine only which neurons are connected to which others, and you re-created that pattern in a computer, there’s no telling what Frankensteinian, ruined, crippled mind you would create.

To copy a person’s mind, you wouldn’t need to scan anywhere near the level of individual atoms. But you would need a scanning device that can capture what kind of neuron, what kind of synapse, how large and active each synapse is, what kind of neurotransmitter, how rapidly the neurotransmitter is being synthesized and how rapidly it can be reabsorbed. Is that impossible? No. But it starts to sound like the tech is centuries in the future rather than just around the corner.

Even if we get there quicker, there is still another hurdle. Let’s suppose we have the technology to make a simulation of your brain. Is it truly conscious, or is it merely a computer crunching numbers in imitation of your behavior?

A half-dozen major scientific theories of consciousness have been proposed. In all of them, if you could simulate a brain on a computer, the simulation would be as conscious as you are. In the Attention Schema Theory, consciousness depends on the brain computing a specific kind of self-descriptive model. Since this explanation of consciousness depends on computation and information, it would translate directly to any hardware including an artificial one.

In another approach, the Global Workspace Theory, consciousness ignites when information is combined and shared globally around the brain. Again, the process is entirely programmable. Build that kind of global processing network, and it will be conscious.

In yet another theory, the Integrated Information Theory, consciousness is a side product of integrated information. Any computing device that integrates information with sufficient density, even an artificial device, is conscious.

Many other scientific theories of consciousness have been proposed, beyond the three mentioned here. They are all different from each other and nobody yet knows which one is correct. But in every theory grounded in neuroscience, a computer-simulated brain would be conscious. In some mystical theories and theories that depend on a loose analogy to quantum mechanics, consciousness would be more difficult to create artificially. But as a neuroscientist, I am confident that if we ever could scan a person’s brain in detail and simulate that architecture on a computer, then the simulation would have a conscious experience. It would have the memories, personality, feelings, and intelligence of the original.

And yet, that doesn’t mean we’re out of the woods. Humans are not brains in vats. Our cognitive and emotional experience depends on a brain-body system embedded in a larger environment. This relationship between brain function and the surrounding world is sometimes called “embodied cognition.” The next task therefore is to simulate a realistic body and a realistic world in which to embed the simulated brain. In modern video games, the bodies are not exactly realistic. They don’t have all the right muscles, the flexibility of skin, or the fluidity of movement. Even though some of them come close, you wouldn’t want to live forever in a World of Warcraft skin. But the truth is, a body and world are the easiest components to simulate. We already have the technology. It’s just a matter of allocating enough processing power.

In my lab, a few years ago, we simulated a human arm. We included the bone structure, all the fifty or so muscles, the slow twitch and fast twitch fibers, the tendons, the viscosity, the forces and inertia. We even included the touch receptors, the stretch receptors, and the pain receptors. We had a working human arm in digital format on a computer. It took a lot of computing power, and on our tiny machines it couldn’t run in real time. But with a little more computational firepower and a lot bigger research team we could have simulated a complete human body in a realistic world.

Let’s presume that at some future time we have all the technological pieces in place. When you’re close to death we scan your details and fire up your simulation. Something wakes up with the same memories and personality as you. It finds itself in a familiar world. The rendering is not perfect, but it’s pretty good. Odors probably don’t work quite the same. The fine-grained details are missing. You live in a simulated New York City with crowds of fellow dead people but no rats or dirt. Or maybe you live in a rural setting where the grass feels like Astroturf. Or you live on the beach in the sun, and every year an upgrade makes the ocean spray seem a little less fake. There’s no disease. No aging. No injury. No death unless the operating system crashes. You can interact with the world of the living the same way you do now, on a smart phone or by email. You stay in touch with living friends and family, follow the latest elections, watch the summer blockbusters. Maybe you still have a job in the real world as a lecturer or a board director or a comedy writer. It’s like you’ve gone to another universe but still have contact with the old one.

But is it you? Did you cheat death, or merely replace yourself with a creepy copy?

I can’t pretend to have a definitive answer to this philosophical question. Maybe it’s a matter of opinion rather than anything testable or verifiable. To many people, uploading is simply not an afterlife. No matter how accurate the simulation, it wouldn’t be you. It would be a spooky fake.

My own perspective borrows from a basic concept in topology. Imagine a branching Y. You’re born at the bottom of the Y and your lifeline progresses up the stalk. The branch point is the moment your brain is scanned and the simulation has begun. Now there are two of you, a digital one (let’s say the left branch) and a biological one (the right branch). They both inherit the memories, personality, and identity of the stalk. They both think they’re you. Psychologically, they’re equally real, equally valid. Once the simulation is fired up, the branches begin to diverge. The left branch accumulates new experiences in a digital world. The right branch follows a different set of experiences in the physical world.

Is it all one person, or two people, or a real person and a fake one? All of those and none of those. It’s a Y.

The stalk of the Y, the part from before the split, gains immortality. It lives on in the digital you, just like your past self lives on in your present self. The right-hand branch, the post-split biological branch, is doomed to die. That’s the part that feels cheated by the technology.

So let’s assume that those of us who live in biological bodies get over this injustice, and in a century or three we invent a digital afterlife. What could possibly go wrong?

Well, for one, there are limited resources. Simulating a brain is computationally expensive. As I noted before, by some estimates the amount of information in the entire internet at the present time is approximately the same as in a single human brain. Now imagine the resources required to simulate the brains of millions or billions of dead people. It’s possible that some future technology will allow for unlimited RAM and we’ll all get free service. The same way we’re arguing about health care now, future activists will chant, “The afterlife is a right, not a privilege!” But it’s more likely that a digital afterlife will be a gated community and somebody will have to choose who gets in. Is it the rich and politically connected who live on? Is it Trump? Is it biased toward one ethnicity? Do you get in for being a Nobel laureate, or for being a suicide bomber in somebody’s hideous war? Just think how coercive religion can be when it peddles the promise of an invisible afterlife that can’t be confirmed. Now imagine how much more coercive a demagogue would be if he could dangle the reward of an actual, verifiable afterlife. The whole thing is an ethical nightmare.

And yet I remain optimistic. Our species advances every time we develop a new way to share information. The invention of writing jump-started our advanced civilizations. The computer revolution and the internet are all about sharing information. Think about the quantum leap that might occur if instead of preserving words and pictures, we could preserve people’s actual minds for future generations. We could accumulate skill and wisdom like never before. Imagine a future in which your biological life is more like a larval stage. You grow up, learn skills and good judgment along the way, and then are inducted into an indefinite digital existence where you contribute to stability and knowledge. When all the ethical confusion settles, the benefits may be immense. No wonder people like Ray Kurzweil refer to this kind of technological advance as a singularity. We can’t even imagine how our civilization will look on the other side of that change.

Some users of LSD say one of the most profound parts of the experience is a deep oneness with the universe. The hallucinogenic drug might be causing this by blurring boundaries in the brain, too.

The sensation that the boundaries between yourself and the world around you are dissolving correlates with changes in brain connectivity while on LSD, according to a study published Wednesday in Current Biology. Scientists gave 15 volunteers either a drop of acid or a placebo and slid them into an MRI scanner to monitor brain activity.

After about an hour, when the high begins peaking, the brains of people on acid looked markedly different from the brains of those on the placebo. For those on LSD, activity in certain areas of the brain, particularly areas rich in neurons associated with serotonin, ramped up.

Their sensory cortices, which process sensations like sight and touch, became far more connected than usual to the fronto-parietal network, which is involved with our sense of self. “The stronger that communication, the stronger the experience of the dissolution [of self],” says Enzo Tagliazucchi, the lead author and a researcher at the Netherlands Institute for Neuroscience.

Tagliazucchi speculates that what’s happening is a confusion of information. Your brain on acid, flooded with signals crisscrossing between these regions, begins muddling the things you see, feel, taste or hear around you with you. This can create the perception that you and, say, the pizza you’re eating are no longer separate entities. You are the pizza and the world beyond the windowsill. You are the church and the tree and the hill.

Albert Hofmann, the discoverer of LSD, described this in his book LSD: My Problem Child. “A portion of the self overflows into the outer world, into objects, which begin to live, to have another, a deeper meaning,” he wrote. He felt the world would be a better place if more people understood this. “What is needed today is a fundamental re-experience of the oneness of all living things.”

The sensation is neurologically similar to synesthesia, Tagliazucchi thinks. “In synesthesia, you mix up sensory modalities. You can feel the color of a sound or smell the sound. This happens in LSD, too,” Tagliazucchi says. “And ego dissolution is a form of synesthesia, but it’s a synesthesia of areas of brain with consciousness of self and the external environment. You lose track of which is which.”

Tagliazucchi and other researchers also measured the volunteers’ brain electrical activity with another device. Our brains normally generate a regular rhythm of electrical activity called the alpha rhythm, which links to our brain’s ability to suppress irrelevant activity. But in a different paper published on Monday in the Proceedings of the National Academy of Sciences, he and several co-authors show that LSD weakens the alpha rhythm. He thinks this weakening could make the hallucinations seem more real.

The idea is intriguing if still somewhat speculative, says Dr. Charles Grob, a psychiatrist at the Harbor-UCLA Medical Center who was not involved with the work. “They may genuinely be on to something. This should really further our understanding of the brain and consciousness.” And, he says, the work highlights hallucinogens’ powerful therapeutic potential.

The altered state of reality that comes with psychedelics might enhance psychotherapy, Grob thinks. “Hallucinogens are a catalyst,” he says. “In well-prepared subjects, you might elicit powerful, altered states of consciousness. [That] has been predictive of positive therapeutic outcomes.”

In recent years, psychedelics have been trickling their way back to psychiatric research. LSD was considered a good candidate for psychiatric treatment until 1966, when it was outlawed and became very difficult to obtain for study. Grob has done work testing the treatment potential of psilocybin, the active compound in hallucinogenic mushrooms.

He imagines a future where psychedelics are commonly used to treat a range of conditions. “[There could] be a peaceful room attractively fixed up with nice paintings, objects to look at, fresh flowers, a chair or recliner for the patient and two therapists in the room,” he muses. “A safe container for that individual as they explore deep inner space, inner terrain.”

Grob believes the right candidate would benefit greatly from LSD or other hallucinogen therapy, though he cautions that bad experiences can still happen for some on the drugs. Those who are at risk for schizophrenia may want to avoid psychedelics, Tagliazucchi says. “There has been evidence saying what could happen is LSD could trigger the disease and turn it into full-fledged schizophrenia,” he says. “There is a lot of debate around this. It’s an open topic.”

Tagliazucchi thinks that this particular ability of psychedelics to evoke a sense of dissolution of self and unity with the external environment has already helped some patients. “Psilocybin has been used to treat anxiety in terminal cancer patients,” he says. “One reason why they felt so good after treatment is that, through ego dissolution, they become part of something larger: the universe. This led them to a new perspective on their death.”

Telepathy, 2015: At the Center for Sensorimotor Neural Engineering of the University of Washington, a young woman dons an electroencephalogram cap, studded with electrodes that can read the minute fluctuations of voltage across her brain. She is playing a game, answering questions by turning her gaze to one of two strobe lights labeled “yes” and “no.” The “yes” light is flashing at 13 times a second, the “no” at 12, and the difference is too small for her to perceive, but sufficient for a computer to detect in the firing of neurons in her visual cortex. If the computer determines she is looking at the “yes” light, it sends a signal to a room in another building, where another woman is sitting with a magnetic coil positioned behind her head. A “yes” signal activates the magnet, causing a brief disturbance in the second subject’s visual field, a virtual flash (a “phosphene”) that she describes as akin to the appearance of heat lightning on the horizon. In this way, the first woman’s answers are conveyed to another person across the campus, going “Star Trek” one better: exchanging information between two minds that aren’t even in the same place.
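
To make the trick concrete, here is a rough sketch of how a computer might decide which strobe the subject is watching. Neurons in the visual cortex tend to fire in step with a flickering light, so the recorded EEG should carry a stronger component at 13 hertz (“yes”) than at 12 hertz (“no”), or vice versa. The signal below is synthetic and the sampling rate is assumed; the lab’s actual analysis would differ.

```python
import numpy as np

# Sketch of frequency-tagged EEG detection: the visual cortex entrains to a
# flickering light, so we look for the stronger spectral peak at 13 Hz
# ("yes") or 12 Hz ("no"). Synthetic data; a real recording, sampling rate,
# and electrode montage would differ.

sampling_rate = 250                     # samples per second (assumed)
t = np.arange(0, 4, 1 / sampling_rate)  # four seconds of recording

# Simulate a subject looking at the "yes" light: a weak 13 Hz rhythm buried in noise.
eeg = 0.5 * np.sin(2 * np.pi * 13 * t) + np.random.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / sampling_rate)

power_yes = spectrum[np.argmin(np.abs(freqs - 13))]
power_no = spectrum[np.argmin(np.abs(freqs - 12))]
print("yes" if power_yes > power_no else "no")
```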

For nearly all of human history, only the five natural senses were known to serve as a way into the brain, and language and gesture as the channels out. Now researchers are breaching those boundaries of the mind, moving information in and out and across space and time, manipulating it and potentially enhancing it. This experiment and others have been a “demonstration to get the conversation started,” says researcher Rajesh Rao, who conducted it along with his colleague Andrea Stocco. The conversation, which will likely dominate neuroscience for much of this century, holds the promise of new technology that will dramatically affect how we treat dementia, stroke and spinal cord injuries. But it will also be about the ethics of powerful new tools to enhance thinking, and, ultimately, the very nature of consciousness and identity.

That new study grew out of Rao’s work in “brain-computer interfaces,” which process neural impulses into signals that can control external devices. Using an EEG to control a robot that can navigate a room and pick up objects—which Rao and his colleagues demonstrated as far back as 2008—may be commonplace someday for quadriplegics.

In what Rao says was the first instance of a message sent directly from one human brain to another, he enlisted Stocco to help play a basic “Space Invaders”-type game. As one person watched the attack on a screen and communicated the best moment to fire using only his thoughts, the other got a magnetic impulse that caused his hand, without conscious effort, to press a button on a keyboard. After some practice, Rao says, they got quite good at it.

“That’s nice,” I said, when he described the procedure to me. “Can you get him to play the piano?”

Rao sighed. “Not with anything we’re using now.”

For all that science has studied and mapped the brain in recent decades, the mind remains a black box. A famous 1974 essay by the philosopher Thomas Nagel asked, “What Is It Like to Be a Bat?” and concluded that we will never know; another consciousness—another person’s, let alone a member of another species—can never be comprehended or accessed. For Rao and a few others to open that door a tiny crack, then, is a notable achievement, even if the work has mostly underscored how big a challenge it is, both conceptually and technologically.

The computing power and the programming are up to the challenge; the problem is the interface between brain and computer, and especially the one that goes in the direction from computer to brain. How do you deliver a signal to the right group of nerve cells among the estimated 86 billion in a human brain? The most efficient approach is an implanted transceiver that can be hard-wired to stimulate small regions of the brain, even down to a single neuron. Such devices are already in use for “deep brain stimulation,” a technique for treating patients with Parkinson’s and other disorders with electrical impulses. But it’s one thing to perform brain surgery for an incurable disease, and something else to do it as part of an experiment whose benefits are speculative at best.

So Rao used a technique that does not involve opening the skull: a fluctuating magnetic field that induces a tiny electric current in a region of the brain. It appears to be safe—his first volunteer was his collaborator, Stocco—but it is a crude mechanism. The smallest area that can be stimulated in this way, Rao says, is not quite half an inch across. This limits its application to gross motor movements, such as hitting a button, or simple yes-or-no communication.

Another way to transmit information, called focused ultrasound, appears to be capable of stimulating a region of the brain as small as a grain of rice. While the medical applications for ultrasound, such as imaging and tissue ablation, use high frequencies, from 800 kilohertz up to the megahertz range, a team led by Harvard radiologist Seung-Schik Yoo found that a frequency of 350 kilohertz works well, and apparently safely, to send a signal to the brain of a rat. The signal originated with a human volunteer outfitted with an EEG, which sampled his brainwaves; when he focused on a specific pattern of lights on a computer screen, a computer sent an ultrasound signal to the rat, which moved its tail in response. Yoo says the rat showed no ill effects, but the safety of focused ultrasound on the human brain is unproven. Part of the problem is that, unlike magnetic stimulation, the mechanism by which ultrasound waves—a form of mechanical energy—create an electric potential isn’t fully understood. One possibility is that it operates indirectly by “popping” open the vesicles, or sacs, within the cells of the brain, flooding them with neurotransmitters, like delivering a shot of dopamine to exactly the right area. Alternatively, the ultrasound could induce cavitation—bubbling—in the cell membrane, changing its electrical properties. Yoo suspects that the brain contains receptors for mechanical stimulation, including ultrasound, which have been largely overlooked by neuroscientists. Such receptors would account for the phenomenon of “seeing stars,” or flashes of light, from a blow to the head, for instance. If focused ultrasound is proven safe, and becomes a feasible approach to a computer-brain interface, it would open up a wide range of unexplored—in fact, barely imagined—possibilities.

Direct verbal communication between individuals—a more sophisticated version of Rao’s experiment, with two connected people exchanging explicit statements just by thinking them—is the most obvious application, but it’s not clear that a species possessing language needs a more technologically advanced way to say “I’m running late,” or even “I love you.” John Trimper, an Emory University doctoral candidate in psychology, who has written about the ethical implications of brain-to-brain interfaces, speculates that the technology, “especially through wireless transmissions, could eventually allow soldiers or police—or criminals—to communicate silently and covertly during operations.” That would be in the distant future. So far, the most content-rich message sent brain-to-brain between humans traveled from a subject in India to one in Strasbourg, France. The first message, laboriously encoded and decoded into binary symbols by a Barcelona-based group, was “hola.” With a more sophisticated interface one can imagine, say, a paralyzed stroke victim communicating to a caregiver—or his dog. Still, if what he’s saying is, “Bring me the newspaper,” there are, or will be soon, speech synthesizers—and robots—that can do that. But what if the person is Stephen Hawking, the great physicist afflicted by ALS, who communicates by using a cheek muscle to type the first letters of a word? The world could surely benefit from a direct channel to his mind.

Maybe we’re still thinking too small. Maybe an analog to natural language isn’t the killer app for a brain-to-brain interface. Instead, it must be something more global, more ambitious—information, skills, even raw sensory input. What if medical students could download a technique directly from the brain of the world’s best surgeon, or if musicians could directly access the memory of a great pianist? “Is there only one way of learning a skill?” Rao muses. “Can there be a shortcut, and is that cheating?” It doesn’t even have to involve another human brain on the other end. It could be an animal—what would it be like to experience the world through smell, like a dog—or by echolocation, like a bat? Or it could be a search engine. “It’s cheating on an exam if you use your smartphone to look things up on the Internet,” Rao says, “but what if you’re already connected to the Internet through your brain? Increasingly the measure of success in society is how quickly we access, digest and use the information that’s out there, not how much you can cram into your own memory. Now we do it with our fingers. But is there anything inherently wrong about doing it just by thinking?”

Or, it could be your own brain, uploaded at some providential moment and digitally preserved for future access. “Let’s say years later you have a stroke,” says Stocco, whose own mother had a stroke in her 50s and never walked again. “Now, you go to rehab and it’s like learning to walk all over again. Suppose you could just download that ability into your brain. It wouldn’t work perfectly, most likely, but it would be a big head start on regaining that ability.”

Miguel Nicolelis, a creative Duke neuroscientist and a mesmerizing lecturer on the TED Talks circuit, knows the value of a good demonstration. For the 2014 World Cup, Nicolelis—a Brazilian-born soccer aficionado—worked with others to build a robotic exoskeleton controlled by EEG impulses, enabling a young paraplegic man to deliver the ceremonial first kick. Much of his work now is on brain-to-brain communication, especially in the highly esoteric techniques of linking minds to work together on a problem. The minds aren’t human ones, so he can use electrode implants, with all the advantages that conveys.

One of his most striking experiments involved a pair of lab rats, learning together and moving in synchrony as they communicated via brain signals. The rats were trained in an enclosure with two levers and a light above each. The left- or right-hand light would flash, and the rats learned to press the corresponding lever to receive a reward. Then they were separated, and each fitted with electrodes to the motor cortex, connected via computers that sampled brain impulses from one rat (the “encoder”), and sent a signal to a second (the “decoder”). The “encoder” rat would see one light flash—say, the left one—and push the left-hand lever for his reward; in the other box, both lights would flash, so the “decoder” wouldn’t know which lever to push—but on receiving a signal from the first rat, he would go to the left as well.

Nicolelis added a clever twist to this demonstration. When the decoder rat made the correct choice, he was rewarded, and the encoder got a second reward as well. This served to reinforce and strengthen the (unconscious) neural processes that were being sampled in his brain. As a result, both rats became more accurate and faster in their responses—“a pair of interconnected brains…transferring information and collaborating in real time.” In another study, he wired up three monkeys to control a virtual arm; each could move it in one dimension, and as they watched a screen they learned to work together to manipulate it to the correct location. He says he can imagine using this technology to help a stroke victim regain certain abilities by networking his brain with that of a healthy volunteer, gradually adjusting the proportions of input until the patient’s brain is doing all the work. And he believes this principle could be extended indefinitely, to enlist millions of brains to work together in a “biological computer” that would tackle questions that could not be posed, or answered, in binary form. You could ask this network of brains for the meaning of life—you might not get a good answer, but unlike a digital computer, “it” would at least understand the question. At the same time, Nicolelis criticizes efforts to emulate the mind in a digital computer, no matter how powerful, saying they’re “bogus, and a waste of billions of dollars.” The brain works by different principles, modeling the world by analogy. To convey this, he proposes a new concept he calls “Gödelian information,” after the mathematician Kurt Gödel; it’s an analog representation of reality that cannot be reduced to bytes, and can never be captured by a map of the connections between neurons. “A computer doesn’t generate knowledge, doesn’t perform introspection,” he says. “The content of a rat, monkey or human brain is much richer than we could ever simulate by binary processes.”

The cutting edge of this research involves actual brain prostheses. At the University of Southern California, Theodore Berger is developing a microchip-based prosthesis for the hippocampus, the part of the mammalian brain that processes short-term impressions into long-term memories. He taps into the neurons on the input side, runs the signal through a program that mimics the transformations the hippocampus normally performs, and sends it back into the brain. Others have used Berger’s technique to send the memory of a learned behavior from one rat to another; the second rat then learned the task in much less time than usual. To be sure, this work has only been done in rats, but because degeneration of the hippocampus is one of the hallmarks of dementia in human beings, the potential of this research is said to be enormous.

Given the sweeping claims for the future potential of brain-to-brain communication, it’s useful to list some of the things that are not being claimed. There is, first, no implication that humans possess any form of natural (or supernatural) telepathy; the voltages flickering inside your skull just aren’t strong enough to be read by another brain without electronic enhancement. Nor can signals (with any technology we possess, or envision) be transmitted or received surreptitiously, or at a distance. The workings of your mind are secure, unless you give someone else the key by submitting to an implant or an EEG. It is, however, not too soon to start considering the ethical implications of future developments, such as the ability to implant thoughts in other people or control their behavior (prisoners, for example) using devices designed for those purposes. “The technology is outpacing the ethical discourse at this time,” Emory’s Trimper says, “and that’s where things get dicey.” Consider that much of the brain traffic in these experiments—and certainly anything like Nicolelis’ vision of hundreds or thousands of brains working together—involves communicating over the Internet. If you’re worried now about someone hacking your credit card information, how would you feel about sending the contents of your mind into the cloud?

There’s another track, though, on which brain-to-brain communication is being studied. Uri Hasson, a Princeton neuroscientist, uses functional magnetic resonance imaging to research how one brain influences another, how they are coupled in an intricate dance of cues and feedback loops. He is focusing on a communication technique that he considers far superior to EEGs used with transcranial magnetic stimulation, one that is noninvasive and safe and requires no Internet connection. It is, of course, language.

LSD, or acid, and its mind-bending effects have been made famous by pop culture hits like “Fear and Loathing in Las Vegas,” a film about the psychedelic escapades of writer Hunter S. Thompson. Oversaturated colors, swirling walls and intense emotions all supposedly come into play when you’re tripping. But how does acid make people trip?

Life’s Little Mysteries asked Andrew Sewell, a Yale psychiatrist and one of the few U.S.-based psychedelic drug researchers, to explain why LSD, short for lysergic acid diethylamide, does what it does to the brain.

His explanation begins with a brief rundown of how the brain processes information under normal circumstances. It all starts in the thalamus, a node perched on top of the brain stem, right smack dab in the middle of the brain. “Most sensory impressions are routed through the thalamus, which acts as a gatekeeper, determining what’s relevant and what isn’t and deciding where the signals should go,” Sewell said.

“Consequently, your perception of the world is governed by a combination of ‘bottom-up’ processing, starting … with incoming signals, combined with ‘top-down’ processing, in which selective filters are applied by your brain to cut down the overwhelming amount of information to a more manageable and relevant subset that you can then make decisions about.

“In other words, people tend to see what they’ve been trained to see, and hear what they’ve been trained to hear.”

The main theory of psychedelics, first fleshed out by a Swiss researcher named Franz Vollenweider, is that drugs like LSD and psilocybin, the active ingredient in “magic” mushrooms, tune down the thalamus’ activity. Essentially, the thalamus on a psychedelic drug lets unprocessed information through to consciousness, like a bad email spam filter. “Colors become brighter, people see things they never noticed before and make associations that they never made before,” Sewell said.

In a recent paper advocating the revival of psychedelic drug research, psychiatrist Ben Sessa of the University of Bristol in England explained the benefits that psychedelics lend to creativity. “A particular feature of the experience is … a general increase in complexity and openness, such that the usual ego-bound restraints that allow humans to accept given pre-conceived ideas about themselves and the world around them are necessarily challenged. Another important feature is the tendency for users to assign unique and novel meanings to their experience together with an appreciation that they are part of a bigger, universal cosmic oneness.”

But according to Sewell, these unique feelings and experiences come at a price: “disorganization, and an increased likelihood of being overwhelmed.” At least until the drugs wear off, and then you’re left just trying to make sense of it all.