The Human Brain Project has been officially selected as one of the European Commission’s two FET Flagship projects. The new project will unite European efforts to address one of the greatest challenges of modern science: understanding the human brain.

The goal of the Human Brain Project is to pull together all our existing knowledge about the human brain and to reconstruct the brain, piece by piece, in supercomputer-based models and simulations.

The models offer the prospect of a new understanding of the human brain and its diseases and of completely new computing and robotic technologies.

The Human Brain Project is planned to last ten years (2013-2023). The cost is estimated at 1.19 billion euros.

More than 80 European and international research institutions are involved in the project, including UCL groups led by Professor Alex Thomson (UCL School of Pharmacy), Professor Neil Burgess (UCL Institute of Cognitive Neuroscience) and Professor John Ashburner (UCL Institute of Neurology).

The project will also associate some important North American and Japanese partners. It will be coordinated at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, by neuroscientist Henry Markram with co-directors Karlheinz Meier of Heidelberg University, Germany, and Richard Frackowiak (a former UCL Vice-Provost) from the Centre Hospitalier Universitaire Vaudois (CHUV) and the University of Lausanne (UNIL).

Professor Alex Thomson, who studies the synaptic circuitry that underpins many models of the brain, said: “The brain is both the most exquisitely beautiful and efficient machine and the most frustratingly difficult to understand. Only a multi-dimensional approach can hope to render its complexity accessible to therapy and imitation.”

Professor Malcolm Grant, UCL President & Provost, said: “By funding the Human Brain Project the European Commission has proved their commitment to funding large scale science research. UCL’s role in the Human Brain Project will strengthen and further develop the world-leading research already underway here in the fields of neurology and neuroscience.”

Researchers hope to better understand the energy efficiency of the human brain, and use this knowledge towards the development of biologically inspired computers. Such devices could have a major impact on industry.

Another major goal of the Human Brain Project is to generate tools and infrastructure for the research community and catalyse the development of new treatments for brain disease.

Clinicians involved with the project will study patients with brain diseases, which cost the European Union more than €800 billion each year.

The Human Brain Project is the world's largest brain research programme and more than 20 UK research teams in academia and industry will be involved in the start of the project.

The selection of the Human Brain Project as a FET Flagship is the result of more than three years of preparation and a rigorous evaluation by a large panel of independent, high-profile scientists chosen by the European Commission.

In the coming months, the partners will negotiate a detailed agreement with the Community for an initial two-and-a-half-year ramp-up phase (2013 to mid-2016). The project will begin work in the closing months of 2013.

And, more recently, in the US, there are rumblings that the Obama administration is going to announce an initiative to fund such research to the tune of $300 million per year for 10 years.

The Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, seeking to do for the brain what the Human Genome Project did for genetics.

The project, which the administration has been looking to unveil as early as March, will include federal agencies, private foundations and teams of neuroscientists and nanoscientists in a concerted effort to advance the knowledge of the brain’s billions of neurons and gain greater insights into perception, actions and, ultimately, consciousness.

Scientists with the highest hopes for the project also see it as a way to develop the technology essential to understanding diseases like Alzheimer’s and Parkinson’s, as well as to find new therapies for a variety of mental illnesses.

Moreover, the project holds the potential of paving the way for advances in artificial intelligence.

The project, which could ultimately cost billions of dollars, is expected to be part of the president’s budget proposal next month. Four scientists and representatives of research institutions said they had participated in planning for what is being called the Brain Activity Map project.

The details are not final, and it is not clear how much federal money would be proposed or approved for the project in a time of fiscal constraint or how far the research would be able to get without significant federal financing.

In his State of the Union address, President Obama cited brain research as an example of how the government should “invest in the best ideas.”

“Every dollar we invested to map the human genome returned $140 to our economy — every dollar,” he said. “Today our scientists are mapping the human brain to unlock the answers to Alzheimer’s. They’re developing drugs to regenerate damaged organs, devising new materials to make batteries 10 times more powerful. Now is not the time to gut these job-creating investments in science and innovation.”

Story C. Landis, the director of the National Institute of Neurological Disorders and Stroke, said that when she heard Mr. Obama’s speech, she thought he was referring to an existing National Institutes of Health project to map the static human brain. “But he wasn’t,” she said. “He was referring to a new project to map the active human brain that the N.I.H. hopes to fund next year.”

Indeed, after the speech, Francis S. Collins, the director of the National Institutes of Health, may have inadvertently confirmed the plan when he wrote in a Twitter message: “Obama mentions the #NIH Brain Activity Map in #SOTU.”

A spokesman for the White House Office of Science and Technology Policy declined to comment about the project.

The initiative, if successful, could provide a lift for the economy. “The Human Genome Project was on the order of about $300 million a year for a decade,” said George M. Church, a Harvard University molecular biologist who helped create that project and said he was helping to plan the Brain Activity Map project. “If you look at the total spending in neuroscience and nanoscience that might be relative to this today, we are already spending more than that. We probably won’t spend less money, but we will probably get a lot more bang for the buck.”

Scientists involved in the planning said they hoped that federal financing for the project would be more than $300 million a year, which if approved by Congress would amount to at least $3 billion over the 10 years.

The Human Genome Project cost $3.8 billion. It was begun in 1990 and its goal, the mapping of the complete human genome, or all the genes in human DNA, was achieved ahead of schedule, in April 2003. A federal government study of the impact of the project indicated that it returned $800 billion by 2010.
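As a back-of-envelope check on those figures (note that the $140-per-dollar figure quoted earlier comes from a different analysis with a different baseline, so the two ratios need not agree):

```python
# Back-of-envelope arithmetic on the figures above. These two numbers
# come from the federal impact study cited in the text; the implied
# ratio differs from the separately quoted $140-per-dollar estimate,
# which used a different baseline.
cost_billions = 3.8      # total cost of the Human Genome Project
return_billions = 800.0  # estimated economic return by 2010
ratio = return_billions / cost_billions
print(f"Implied return per dollar invested: ${ratio:.0f}")
```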

The advent of new technology that allows scientists to identify firing neurons in the brain has led to numerous brain research projects around the world. Yet the brain remains one of the greatest scientific mysteries.

Composed of roughly 100 billion neurons that each electrically “spike” in response to outside stimuli, as well as in vast ensembles based on conscious and unconscious activity, the human brain is so complex that scientists have not yet found a way to record the activity of more than a small number of neurons at once, and in most cases that is done invasively with physical probes.
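The "spiking" behavior described above can be illustrated with a toy model. Below is a minimal sketch of a leaky integrate-and-fire neuron, the simplest standard model of a spiking cell; all parameters are illustrative, not physiologically calibrated, and this is not code from any of the projects discussed here:

```python
# Minimal leaky integrate-and-fire neuron: a toy illustration of the
# electrical "spiking" described above. Parameters are illustrative.

def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0,
                 v_reset=-75.0, tau=10.0, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and integrates input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:        # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset          # reset after spiking
    return spikes

# A sustained stimulus produces repeated, regular spiking.
spikes = simulate_lif([20.0] * 100)
```

The hard part, of course, is not simulating one such unit but recording what billions of real ones are doing at once.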

But a group of nanotechnologists and neuroscientists say they believe that technologies are at hand to make it possible to observe and gain a more complete understanding of the brain, and to do it less intrusively.

In June in the journal Neuron, six leading scientists proposed pursuing a number of new approaches for mapping the brain.

One possibility is to build a complete model map of brain activity by creating fleets of molecule-size machines to noninvasively act as sensors to measure and store brain activity at the cellular level. The proposal envisions using synthetic DNA as a storage mechanism for brain activity.

“Not least, we might expect novel understanding and therapies for diseases such as schizophrenia and autism,” wrote the scientists, who include Dr. Church; Ralph J. Greenspan, the associate director of the Kavli Institute for Brain and Mind at the University of California, San Diego; A. Paul Alivisatos, the director of the Lawrence Berkeley National Laboratory; Miyoung Chun, a molecular geneticist who is the vice president for science programs at the Kavli Foundation; Michael L. Roukes, a physicist at the California Institute of Technology; and Rafael Yuste, a neuroscientist at Columbia University.

The Obama initiative is markedly different from a recently announced European project that will invest 1 billion euros in a Swiss-led effort to build a silicon-based “brain.” The project seeks to construct a supercomputer simulation using the best research about the inner workings of the brain.

Critics, however, say the simulation will be built on knowledge that is still theoretical, incomplete or inaccurate.

The Obama proposal seems to have evolved in a manner similar to the Human Genome Project, scientists said. “The genome project arguably began in 1984, where there were a dozen of us who were kind of independently moving in that direction but didn’t really realize there were other people who were as weird as we were,” Dr. Church said.

However, a number of scientists said that mapping and understanding the human brain presented a drastically more significant challenge than mapping the genome.

“It’s different in that the nature of the question is a much more intricate question,” said Dr. Greenspan, who said he is involved in the brain project. “It was very easy to define what the genome project’s goal was. In this case, we have a more difficult and fascinating question of what are brainwide activity patterns and ultimately how do they make things happen?”

The initiative will be organized by the Office of Science and Technology Policy, according to scientists who have participated in planning meetings.

The National Institutes of Health, the Defense Advanced Research Projects Agency and the National Science Foundation will also participate in the project, the scientists said, as will private foundations like the Howard Hughes Medical Institute in Chevy Chase, Md., and the Allen Institute for Brain Science in Seattle.

A meeting held on Jan. 17 at the California Institute of Technology was attended by the three government agencies, as well as neuroscientists, nanoscientists and representatives from Google, Microsoft and Qualcomm. According to a summary of the meeting, it was held to determine whether computing facilities existed to capture and analyze the vast amounts of data that would come from the project. The scientists and technologists concluded that they did.

They also said that a series of national brain “observatories” should be created as part of the project, like astronomical observatories.

The front page of Monday's New York Times revealed the Obama Administration may soon seek billions of dollars from Congress to map the human brain, in an ambitious project many have claimed will do for neuroscience what the Human Genome Project has done for genetics.

Details probably won't emerge until March at the earliest, but it's a safe bet the Administration's plan will resemble a Brain Activity Map (aka "BAM") project outlined last year in the journal Neuron. BAM is an acronym you'll probably be hearing a lot in the weeks and months to come — so let's talk about what the BAM project is, what it isn't, and why it's raising both interest and eyebrows throughout the scientific community.

What is a BAM?
Your brain is vast on a cosmic scale. Billions upon billions of neurons communicate with one another via trillions of connections, giving rise to what amounts to a network of networks. Widely adopted (but by no means universally accepted) theories posit that these neural networks are the wellsprings of such complex processes as perception and action. Many neuroscientists believe that a detailed BAM could reveal valuable clues about these and other cognitive functions, and perhaps human consciousness, itself. Columbia University's Rafael Yuste is one of them.

Yuste is a co-author of a widely circulated BAM project proposal published last July in the journal Neuron, and one of the scientists whose advice the Obama Administration has sought in planning what the NYT characterized as a ten-year, multi-billion dollar undertaking. In an interview with io9, Yuste explained that the ultimate goal of the project is to create what he calls a functional map of the active human brain. "You could argue, in a very simplistic way, that everything that we are, our whole mental world, amounts to nothing more than neural circuits firing [in patterns] throughout the brain," Yuste said. By mapping circuit activity, Yuste thinks researchers can "discover patterns that are the physical representation and origin of mental states — of thoughts, for example, or memories."

Said map would amount to much more than what is often referred to as a "static" model — a wiring diagram that charts how neurons connect with one another. A "functional" model, Yuste emphasized, would go much further, by allowing researchers to see not just the connections between the tens of billions of neurons that comprise a human brain, but the individual action of every cell in a given neural circuit. George Church, a molecular geneticist at Harvard University, told io9 that it's like the difference between knowing the spatial distribution of a city's telephone wires and knowing where, when, and how those wires are transmitting messages. (An early architect and longtime ambassador of the Human Genome Project, Church has been tapped to work on the BAM project, and is a co-author on last year's white paper.)
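Church's telephone-wire analogy can be made concrete with a toy data structure. The sketch below is purely illustrative (the three-neuron circuit and all names are invented, not anything from the BAM proposal): a static map is just a wiring diagram, while a functional map layers time-resolved activity on top of it:

```python
# Toy illustration of the distinction drawn above. A "static" map
# records only which neurons connect to which; a "functional" map
# additionally records each neuron's activity at each time step.
# All names and numbers here are invented for illustration.

static_map = {            # wiring diagram: neuron -> neurons it synapses onto
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

functional_map = {        # same circuit, plus spike trains (1 = spike)
    "wiring": static_map,
    "activity": {
        "A": [1, 0, 0, 1],
        "B": [0, 1, 0, 0],
        "C": [0, 0, 1, 0],
    },
}

def active_connections(fmap, t):
    """Connections whose source neuron spiked at time t --
    information a static map alone cannot provide."""
    wiring, activity = fmap["wiring"], fmap["activity"]
    return [(src, dst) for src, targets in wiring.items()
            if activity[src][t] for dst in targets]
```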

Creating such a map will require nothing short of a technological revolution in the field of neuroscience. It is currently possible to insert electrodes into the brain that can both monitor and induce brain activity. But the resolution offered by these and less invasive techniques is poor. That means the first step toward a BAM will be to develop tools that can actually record the individual activity of every neuron in a brain circuit. The second step will be to create tools that can influence the activity of individual neurons.

In this sense, Yuste said, BAM "is essentially a technical development project," aimed at devising techniques that can both measure and stimulate neurons with exquisite spatial specificity. Development and implementation of these tools would, of course, begin on smaller organisms (think flies and mice) and specific brain regions, progressing toward the ultimate goal of plotting the real-time activity of the neurons and networks in an entire human brain.

To many, these tools would represent an enormous step forward for the field of neuroscience, producing therapeutic, financial, and intellectual fruit as rich and plentiful as The Human Genome Project. But more scientists are balking at the prospect of a multi-billion-dollar BAM project — and its comparisons to the HGP — than you might expect.

HGP vs. BAM
Grievances thus far — many of which have been aired and compiled in this piece over at the Atlantic — have centered on two concerns. The first is money.

Already strapped for cash, some neuroscientists are worried that an undertaking as massive as a BAM project could trigger a major reorganization of existing neuro funds, diverting precious research capital away from smaller projects ruled unsympathetic to the BAM cause. Other scientists fear the worst: a wholesale redistribution of all existing biological science funding (that means fields outside of neuroscience) to make room for BAM's multi-billion-dollar pricetag.

Though Yuste admitted he has little influence over which financial route Washington will take, he said he's pulling for the third, and by far most palatable, scenario: "The human genome was sequenced with new money, and BAM should not be funded with money reapportioned from other scientific enterprises."

"Our sincere hope," added Yuste, "is that this will be new money, new funding... not for us, but for the entire field" — with the additional proviso that "every single tool or technique [developed] will be immediately released for the entire neuroscience community."

Which brings us to the second major class of objection. Even in the unlikely event Washington does inject billions of new dollars into research, many feel the BAM project's conceptual aims remain far less clear than those of the Human Genome Project.

"I think the comparison between the Human Genome Project and BAM is completely inappropriate," Princeton genomicist Leonid Kruglyak told io9, "other than the fact that they're both big projects aimed at a large biological problem." In the case of the human genome, he said, the ultimate goal was extremely clear: "the three billion base pairs of the human genome, in order, defined."

"That's a pretty clear target to shoot at," he continued, "and I think, at least, many of the applications of that were very clear. Would it be valuable to be able to record the activity of a large number of individual neurons simultaneously? Absolutely. Would that solve how the brain works? I think that's a much bigger question."

Kruglyak said one of the main arguments for launching the Human Genome Project and conducting it in a centralized fashion was that researchers were already searching for disease genes and sequencing the genome piecemeal. It was going to get done one way or the other, but it was going to happen in thousands of little bits at a time, which was highly inefficient from the standpoint of time and money. Centralizing the process through the Human Genome Project dramatically reduced costs while accelerating the mapping process toward a certain goal with definitive applications. "But I think even folks who are pushing BAM would agree that, even in a fifteen year time frame, they don't see any way of doing something similar for the human brain," said Kruglyak. "It would be a nice thing to do, but it's completely in the realm of science fiction at this point."

Church and Yuste have a far easier time drawing parallels between BAM and the Human Genome Project. They cite, for example, the potential for financial return. "I would argue that this will be more economically powerful than the Human Genome Project," says Yuste. Church echoed his colleague's sentiment. "We learned from the HGP experience that technology should be done as early as possible," says Church, noting that the price of genome sequencing may have been brought down a million-fold, but only after the HGP project was over. "If you mix technology development and applications from the beginning, I think you wind up with a more cost-effective and relevant project. Those are things that we're going to do differently this time around."

"Humans are nothing but our brains," Yuste said of the potential applications for technology produced in pursuit of a map of human brain activity. "Our whole culture, our personality, our minds, are a result of activity in the brain." Church tempered Yuste's holistic response with specifics. He looks forward to new medical technologies like brain-computer interfaces for cochlear, epilepsy, spinal injury and retinal implants. These devices, he argues, only stand to improve, and at an enormous benefit to the economy. "We don't have to speculate on whether there's ever going to be a market for this stuff," he asserted. "There already is."

This, of course, doesn't even begin to touch on what's going on in Washington surrounding the upcoming sequester and what effect that may have on additional federal spending, especially at the scale the administration is considering.

Personally, I'm all for these projects. There's a lot of research currently being done in Brain Machine Interfacing, so the potential boon to that field is obvious once we know more about how the brain works, not to mention the untold impacts it could have for psychotherapy, treatment of neurological disorders and so on. But the question remains whether or not the initiative can get itself off the ground in the US.

Additionally, with similar research being funded in the EU, what are the prospects like for international collaboration between the two initiatives?

Most important of all, what do you think we can honestly expect to come out of these two major brain research initiatives?

Posts

Understanding of the human brain has been advancing by leaps and bounds. There is still a huge amount we don't know about very basic functioning, and abstract questions like "how does consciousness work" can't even be defined, let alone answered. But research into the electro-chemical workings of the brain, modification via prostheses, and AI development all contribute to greater understanding. This process will continue to accelerate, regardless of these initiatives, in my mind. We already discover things far more quickly than we can understand them, so I really don't see any need at all to accelerate research in any case.

You're certainly a lot more optimistic than I am. Consider what you've written about: consciousness. Some of the most fundamental questions humanity has asked itself, "Who are we? What is it that makes us, us?", are related to this concept. Thousands of years of technological advancement, and we are no closer to answering these questions; we can't even properly define them.

Do you know that we have only a rudimentary (and that's being generous) understanding of the mechanisms of memory in the human brain? Same for emotion?

Sure, scientists have made reasonably educated guesses about which broad sections of the brain are associated with certain sensory and motor functions (e.g. hearing, sight, touch, etc.). And that has allowed scientists to pull off some neat parlor tricks. But those higher functions? They’re still a mystery.

We can extract images from the visual cortex because there’s a nearly one-to-one mapping from the retina. We can even follow it through some of the processing the visual cortex performs. But after that? We don’t know. Better yet: when I see a red car and it causes me to recall the red apple I ate this morning, how is that processed? We don’t know.

Why has the field of neurology been so hard to crack? Simple: ethics!

Even for animal test subjects, probing and manipulating brains is inhumane, so you can forget about doing it to a person. That leaves scientists hampered. Not much can be gleaned from a deceased individual, nor from the non-invasive technologies we’ve developed to observe the human brain. The major breakthroughs generally come from case studies of some unfortunate individual who happens to have a disease affecting their brain. For instance, Patient JW led to huge advances in our understanding of the interconnection between the hemispheres of the brain.

So, what will it take to advance the field? Better technology. Just take a look at one of those articles:

Creating such a map will require nothing short of a technological revolution in the field of neuroscience. ... The first step toward a BAM will be to develop tools that can actually record the individual activity of every neuron in a brain circuit. The second step will be to create tools that can influence the activity of individual neurons.

That technology simply doesn’t exist at this time. It will take real innovation to make it happen. And while money is always helpful, throwing money at a problem is no guarantee of success.

It's not an inherently unsolvable problem either though. There's all sorts of reasons we should be researching improved neural interfacing electrodes, not least of which is that we're starting to get good enough to want to plug prosthetics into the nervous system direct.

Also, I hope some of that money will be allocated for developing some guidelines now for the ethical problems associated with starting up complex brain simulations. At some point, those things will be not only arguably alive, but arguably human as well.

I'd rather we know what we're doing with AI rights before we commit too many atrocities.

Realizing we've committed accidental genocide would be a hell of a thing.

I am indeed aware that higher brain functions are very, very poorly understood.

But, I must say I really don't mind if this sort of discovery takes some time. In my mind, the field is already moving so quickly that I'm worried about its implications, especially with regards to a mind-machine interface.

The technology already exists to take an artificial eye and hook it directly to the brain, and have it work. It's possible to move a cursor on a screen via one's mind. Prostheses are being developed that give feedback to the user. Etc.

As you point out, ethical restraints make it difficult to experiment on humans. So most developments are coming from medicine itself, for people who need prostheses that are linked to the brain, or trying to understand brain damage itself. This is fine, and is hugely beneficial to those who have lost a limb, or have suffered a brain injury.

My worry is that, in short, we all have brains, and being able to alter their functionality will have unparalleled effects on individuals and society. What happens when a prosthetic is better than the original, natural body part? Or when it becomes more efficient to interface with a computer via thought than by keyboard/mouse/voice? Or when useful information can be conveyed from one person's brain to another's, telepathy by another name? These aren't going to happen tomorrow, but we've already started down the path. I think as a society we are simply not prepared. Technology is already developing so quickly in other fields that it is difficult to keep up. Our lives have already been changed hugely by the comparatively simple concept of networking small computers together, to name but one recent advance. Messing with the brain will produce wondrous results, and who knows what kind of problems. Having read an awful lot of sci-fi, I've been exposed to some of these potential problems, and it worries me. Because we're not going to slow down. And if some sort of breakthrough is made, some leap in understanding that finally allows us a real understanding of something like memory or reasoning, then we'll jump all over it and start implementing it as quickly as possible, without reflection on the consequences.

I think our understanding of intelligence and modification of the human body and mind will be the central quandaries of this century. And I think it will get messy.

Oh, crap, this is how we end up in ahistorical anarchy with wizards, demons and magical ruins, isn't it?

runes?

See, if you had been writing that with your brain, you wouldn't mistake homophones.

(edit: like technically both could work; they'd just, you know, mean radically different things.)

Tycho, I don't disagree on any particular point. I'm just a little more optimistic about society's ability to adjust, and more pessimistic about how rapidly this stuff will be brought to market. While 'messy' could be very bad indeed, particularly if religion gets involved, I'm not that worried about it.

I'm probably a little biased, because I kinda desperately want those technologies. Like, the neural interface stuff is something I'd totally dedicate a large chunk of my life to getting right. I really wish I had taken a different degree track, but they're going to need software and other IT folk eventually.

All the stuff I want is not in our near future. So I have to use hormones to mod my body instead of growing a new one for transplant.

But the interface stuff is super awesome. I really want to work in that but no one doing that stuff is working at a school that wants to accept my application >: (

I was giggling a bit about the idea of better-than-real prosthetics with tactile feedback, and how much they would mean to the f-to-m transgender community. Though honestly, a cybernetic cooter was my first thought.

On the technology front, I came across something that may be useful for imaging neurons. In 2011, researchers developed a solid-state camera that images by using Fourier transformations of the angles of incident light. Just a tiny piece of silicon, no glass required:

Image quality isn't the only measure of a camera's functionality. The PFCA, developed by a Cornell Postdoc, has only a 20-pixel resolution but its size and construction will allow it to go where few cameras have been before.

The Planar Fourier Capture Array (PFCA) is constructed from a single piece of doped silicon and has neither a lens nor any moving parts. It measures just 1/100th of a millimeter thick and only half a millimeter on each side, thinner than a human hair. Its dim 20-pixel-wide images are captured using Fourier transformations. Basically, the PFCA doesn't record images as a whole. Instead, each pixel records one Fourier component of the image, based on the angles of the light incident upon it. This disparate data is then patched together by a computer into a unified image. "It's not going to be a camera with which people take family portraits, but there are a lot of applications out there that require just a little bit of dim vision," states Gill.

Nothing on the PFCA requires off-chip manufacturing, which results in an incredibly simple, small, and light miniature camera that costs pennies to produce. Similar-sized cameras with moving parts are more expensive by a factor of ten! This allows the camera to be, say, implanted in your skull to image neurons or used by satellites to measure the angle of the Sun or even help tiny robots to navigate a landscape.
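As a rough intuition for how an array like that can image without a lens: if each pixel reports one Fourier component of the incoming light, a computer can recover the scene by inverting the transform. Here's a toy 1-D sketch of that idea (the scene values and the 1-D simplification are mine, not from the Cornell work, which measures angle-sensitive responses in 2-D):

```python
import numpy as np

# Toy 1-D model: each "pixel" of the array reports one Fourier
# component of the incident light distribution. This is only an
# intuition aid; the scene values below are invented.
scene = np.array([0.0, 0.2, 1.0, 0.6, 0.1, 0.0, 0.3, 0.0])  # light intensity vs. angle

# What the sensor effectively records: one complex Fourier component per pixel.
pixel_readings = np.fft.fft(scene)

# Off-chip reconstruction: invert the transform to recover the scene.
reconstruction = np.fft.ifft(pixel_readings).real

assert np.allclose(reconstruction, scene)  # lossless in this idealized model
```

In the real device the reconstruction is lossy and dim, of course, but the division of labor is the same: dumb angle-sensitive silicon on-chip, all the image assembly done in software afterward.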

On a biology tangent, scientists have grown a rabbit penis from stem cells and successfully transplanted it with the organ proving itself to function properly. They're hoping to use the research to treat soldiers who have been victims of IEDs that have pretty much destroyed their lower body.

Given time, I imagine the same will come about for female genitalia and reproductive organs.

In lots of ways I want the technology as well. I find it super interesting, and I would, for example, upload my consciousness into a computer if it became possible to do so. Religion (and politics) getting involved is exactly what I envision. I think a huge part of politics will be stance on technology: more, or less? Especially when it comes to changing humans.

They've also grown a rat pancreas from stem cells in a mouse and then transplanted it back into a rat, and vice versa; it's also been done between two types of pig. Human stem cells have a few extra hoops to jump through compared to rodents and pigs, though.

Understanding of the human brain has been advancing by leaps and bounds. There is still a huge amount we don't know about very basic functioning, and abstract questions like "how does consciousness work" can't even be defined, let alone answered. But research into the electro-chemical workings of the brain, modification via prostheses, and AI development all contribute to greater understanding. In my mind, this process will continue to accelerate regardless of these initiatives. We already discover things far more quickly than we can understand them, so I really don't see any need to accelerate research in any case.

You're certainly a lot more optimistic than I am. Consider what you've written about: consciousness. Some of the most fundamental questions humanity has asked itself, "Who are we? What is it that makes us, us?", are related to this concept. Thousands of years of technological advancement, and we are no closer to answering these questions; we can't even properly define the questions we're asking.

Do you know that we have only a rudimentary (and that's being generous) understanding of the mechanisms of memory in the human brain? Same for emotion?

Sure, scientists have made reasonably educated guesses about which broad sections of the brain are associated with certain functions (i.e. hearing, sight, touch, etc.), and that has allowed them to pull off some neat parlor tricks. But the higher functions? They're still a mystery.

We can extract images from the visual cortex because there's a nearly 1-to-1 mapping from the retina. We can even follow the image through some of the processing the visual cortex performs on it. But after that? We don't know. Better yet: when I view a red car and it causes me to recall the red apple I ate this morning, how is that processed? We don't know.

Why has the field of neurology been so hard to crack? Simple: ethics!

Even for animal test subjects, probing and manipulating brains is inhumane, so you can forget about doing it to a person. That leaves scientists hampered. Not much can be gleaned from a deceased individual, nor from the non-invasive technologies we've developed to observe the human brain. Major breakthroughs generally come from case studies of some unfortunate individual who happens to have a disease affecting their brain. For instance, Patient JW led to huge advances in our understanding of the interconnection between the hemispheres of the brain.

So, what will it take to advance the field? Better technology. Just take a look at one of those articles:

Creating such a map will require nothing short of a technological revolution in the field of neuroscience. ... The first step toward a BAM will be to develop tools that can actually record the individual activity of every neuron in a brain circuit. The second step will be to create tools that can influence the activity of individual neurons.

That technology simply doesn't exist at this time. It will take innovation to make it happen. And while money is always helpful, throwing money at the problem is no guarantee of success.

I am indeed aware that higher brain functions are very, very poorly understood.

The quandary is that the mechanisms of the higher-level functions (consciousness, self-awareness, planning, troubleshooting) are harder for us to understand and examine. But, at the same time, it could be argued that they're far less practical. It's why you see a lot of work on weak AI rather than strong AI. A computer that can understand natural language or pattern-match a visual image is far more practical than one that is self-aware.

Not to belittle the extraordinary achievements of those working in these fields, because the engineering is anything but simple, but limb replacement and artificial eyes require only a relatively trivial understanding of the brain's mechanisms. The work isn't concerned with how information is integrated or what is done with it, just with sending or receiving information at natural hooks already present on the brain. Like I said, the mapping of the retina to the visual cortex is nearly 1-to-1, and a similar mapping is found on the motor cortex for movements. So all that's needed is stimulators or receptors at the proper hooks, and you've achieved your goal. Think of the brain as a black box: there's no need to be concerned with how the brain is using the data it receives or why it decides to send out the data it does, just that your prosthetic interprets it correctly.
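Incidentally, that black-box framing is exactly how simple brain-machine decoders work in practice: fit a mapping from recorded firing rates to intended movement without modeling what the cortex is computing. A minimal sketch with synthetic data (every number and name here is invented for illustration; real decoders use Kalman filters and the like):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: firing rates of 12 hypothetical motor-cortex
# channels (rows = time samples) and the 2-D cursor velocity produced
# during each sample. Entirely made up for the sketch.
n_samples, n_channels = 500, 12
true_map = rng.normal(size=(n_channels, 2))          # unknown brain-to-velocity mapping
rates = rng.poisson(5.0, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_map + rng.normal(0, 0.1, size=(n_samples, 2))

# "Black box" decoding: least-squares fit, no model of what the neurons mean.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new burst of activity into a cursor command.
new_rates = rng.poisson(5.0, size=(1, n_channels)).astype(float)
predicted_velocity = new_rates @ decoder
print(predicted_velocity.shape)  # (1, 2): an x/y cursor velocity
```

The point of the toy is that the fit never asks *why* those channels fire; it only needs the input-output correlation, which is why prosthetics can work without solving the deep mysteries first.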

This is good, because it's easier to build prosthetics when you don't have to first solve every mystery of the brain. But we're not going to answer questions about consciousness, intelligence, and executive function while we treat the brain as a black box. And while those may not be the most practical questions, I still think they're some of the most important.

And I think that to crack those mysteries you'll need the technology, but you'll also need great thinkers to figure it out. I don't think it's something you can simply tie to the exponential increase in technology.

When people unite together, they become stronger than the sum of their parts.
Don't assume bad intentions over neglect and misunderstanding.

Holy fucking christ, this is awesome. Angle-of-incidence light capture is kind of a big deal, because it gets you into capturing light fields rather than images, which gives you all sorts of nifty benefits.

Well, the stuff that bugs me personally is mostly bone related so research in cybergenitals isn't that relevant

But yeah, the f->m community is probably pretty close to having that breakthrough as the rabbit research suggests.

There's not nearly as much reason to grow female genitals though. You can actually make pretty awesome ones from, um, other starting materials. Reproductive stuff maybe, but you'd have to deal with different bone structure or just do every birth by C section.

Dr Miguel Nicolelis said the advance [in a neuroimplant for detecting infrared], reported in the Nature Communications journal this week, was just a prelude to a major breakthrough on a "brain-to-brain interface" which will be announced in another paper next month.

It amuses me that Geth agreed with this post.

Anyway, I'm excited about these efforts. I know nothing about neurology or psychology, but the ways the brain works are super interesting to me. I'm sure people will disagree over what the right questions to tackle are and what the best way to approach them is, but I'd rather scientists be arguing over the best way to use all this money than having to make do without any.

In a First, Experiment Links Brains of Two Rats
By JAMES GORMAN
Published: February 28, 2013

In an experiment that sounds straight out of a science fiction movie, a Duke neuroscientist has connected the brains of two rats in such a way that when one moves to press a lever, the other one does, too — most of the time.

The neuroscientist, Miguel Nicolelis, known for successfully demonstrating brain-machine connections, like the one in which a monkey controlled a robotic arm with its thoughts, said this was the first time one animal’s brain had been linked to another.

The question, he said, was: “Could we fool the brain? Could we make the brain process signals from another body?” The answer, he said, was yes.

He and other scientists at Duke, and in Brazil, published the results of the experiment in the journal Scientific Reports. The work received mixed reviews from other scientists, ranging from “amazing” to “very simplistic.”

Much of Dr. Nicolelis’s work is directed toward creating a full exoskeleton that a paralyzed person could operate with brain signals. Although this experiment is not directly related, he said, it helps refine the ability to read and translate brain signals, an important part of all prosthetic devices connected to the brain, and an area in which brain science is making great advances.

He also speculated about the future possibility of a biological computer, in which numerous brains are connected, and views this as a small step in that direction.

The experiment involved extensive training for both rats, with water as a reward. One, the so-called encoder rat, learned to press one of two levers, left or right, in response to a light signal over the correct lever.

The second, or decoder rat, also learned to press either the left or right lever in response to light, but then went on to respond instead to brain stimulation from his rat partner.

For the experiment, recording electrodes were implanted in the primary motor cortex of the encoder rat and stimulating electrodes in the same area in the decoder rat.

Then, as the encoder responded to the light appearing over one lever or the other, its pattern of brain activity was sent to a computer, which simplified the pattern for transmission to the decoder rat. The signal received by the decoder was not the same as the stimulation it had previously received in training, Dr. Nicolelis said.

Seven out of 10 times, the decoder rat pressed the right lever.

The researchers reported similar results in other experiments, based on whether the rats sensed a narrow or wide opening with their whiskers. In this case the electrodes were implanted in a different part of the brain, where sensory signals are received.

Ron D. Frostig, a neuroscientist at the University of California, Irvine, said, “I think it’s an amazing paper.” He described it as a “beautiful proof of principle” that information could be transferred from one brain to another in real time — not by mind-reading or telepathy, but a transfer of what might be called the impulse to act.

Andrew B. Schwartz, a neuroscientist at the University of Pittsburgh, was less impressed. He described the work as “very simplistic” and pointed out that the rat receiving the signal pushed the right lever only 7 out of 10 times and would have done so 5 out of 10 times by chance.

There was an additional twist to the research. Dr. Nicolelis added a touch of international drama by locating one rat at Duke, in North Carolina, and another in Natal, Brazil. Similarly, in his earlier work, he had a monkey in North Carolina operate a robotic arm in Japan.

The distance does not change the essential science, but adds some difficulty to the experiment, because the signals sent from one brain to the other had to go through an Internet connection.

This article has been revised to reflect the following correction:

Correction: February 28, 2013

An earlier version of this article misstated the university where Ron D. Frostig works. It is the University of California, Irvine, not the University of California, Davis.
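Schwartz's point about 7 out of 10 presses can be made concrete with a quick binomial calculation (my own back-of-envelope figure, not from the paper; the published study aggregated many more trials than a single 10-press block):

```python
from math import comb

# Probability of 7 or more correct lever presses out of 10 by pure
# chance, if each press is an independent 50/50 guess.
n, k = 10, 7
p_at_least_k = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(round(p_at_least_k, 3))  # 0.172
```

A 17% chance of hitting 7/10 by luck means a single block that size proves little on its own; it's the accumulation across many trials that makes the effect statistically credible.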

Brain-to-brain interfaces have arrived, and they are absolutely mindblowing
Robert T. Gonzalez
In a stunning first for neuroscience, researchers have created an electronic link between the brains of two rats, and demonstrated that signals from the mind of one can help the second solve basic puzzles in real time — even when those animals are separated by thousands of miles.
Here's how it works. An "encoder" rat in Natal, Brazil, trained in a specific behavioral task, presses a lever in its cage it knows will earn it a reward. A brain implant records activity from the rat's motor cortex and converts it into an electrical signal that is delivered via neural link to the brain implant of a second "decoder" rat.

Still with us? This is where things get interesting. Rat number two is in an entirely different cage. In fact, it's in North Carolina. The second rat's motor cortex processes the signal from rat number one and — despite being unfamiliar with the behavioral task the first rat has been conditioned to perform — uses that information to press the same lever.

The experiment, the results of which are published free of charge in today's issue of Scientific Reports, was led by Duke neuroscientist Miguel Nicolelis, a pioneer in the field of brain-machine interfaces (BMIs). Back in 2011, Nicolelis and his colleagues unveiled the first such interface capable of a bi-directional link between a brain and a virtual body, allowing a monkey to not only mentally control a simulated arm, but receive and process sensory feedback about tactile properties like texture. Earlier this month, his team unveiled a BMI that enables rats to detect normally invisible infrared light via their sense of touch.

But an intercontinental mind-meld represents something new: a brain-to-brain interface between two live rats — one that enables realtime sharing of sensorimotor information. It's a scientific first, and while it's not telepathy, per se, it's certainly something close. Neither rat was necessarily aware of the other's existence, for example, but it's clear that their minds were, in fact, communicating. "It's not the Borg," Nicolelis tells Nature's Ed Yong. What he has created, he says, is "a new central nervous system made of two brains."

Said nervous system is far from perfect. Untrained decoder rats receiving input from a trained encoder partner only chose the correct lever around two-thirds of the time. That's definitely better than random odds, but still a far cry from the 95% accuracy of the encoder rats.

What this two-brain system does do, Nicolelis argues, is enable the rats to work with one another in unprecedented ways. And while neural communication between two animals on entirely separate continents is impressive in its own right*, Nicolelis says the most groundbreaking application of this technology — a 3-, 4-, or n-mind "brain net" — is still to come.

"These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains," he said in a statement, "so basically, we are creating an organic computer that solves a puzzle."

"We cannot predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net," he continues. "In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves."

The study is published in the latest issue of Scientific Reports. (No subscription required!)
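The encoder-to-decoder relay the article describes can be caricatured in a few lines (a deliberately crude sketch with invented numbers and function names; the real system transmits processed patterns of cortical activity over the network, not a single left/right bit):

```python
# Crude caricature of the relay: the encoder rat's recorded activity is
# reduced to a simple feature, sent over the link, and turned into a
# stimulation pattern for the decoder rat. All values are invented.

def encode(spike_counts):
    """Reduce motor-cortex activity to a left/right decision feature."""
    left, right = spike_counts
    return "left" if left > right else "right"

def stimulate(decision):
    """Map the transmitted decision onto a stimulation pattern."""
    return {"left": [1, 0], "right": [0, 1]}[decision]

# Encoder rat presses the left lever; its left-preferring units fire more.
message = encode((42, 17))      # "left"
pattern = stimulate(message)    # [1, 0]
print(message, pattern)
```

Even at this cartoon level you can see why the decoder needed training: the stimulation pattern is not the same signal its own cortex would have produced, so the animal has to learn what the incoming pattern means.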

Interesting research about the lifespan of neurons. Scientists transplanted Purkinje cells (PCs) from a short-lived species of mouse (lifespan about 18 months) into longer-living rat hosts. Normally, 40% of the mice's PCs would die off as the animals approached the end of their lifespan, versus a 90% survival rate for PCs in the rats.

The result? The PCs from the mice survived longer than the 18-month lifespan of the mice they were taken from.

Mouse neurons can far outlive their mouse hosts if transplanted into longer-lived rats, according to new research published today (February 25) in Proceedings of the National Academy of Sciences. Researchers demonstrated that certain mouse neurons, which often die off well before the end of a mouse's life, could survive twice as long if transplanted into rats, despite showing signs of decreased function. The findings suggest that neuronal survival is not pre-programmed but strongly influenced by the brain microenvironment.

The study suggests that the “whole aging milieu of an organism” affects a cell’s function and survival, said Judith Campisi, a cell and molecular biologist at the Buck Institute for Research on Aging who was not involved in the study. “There’s nothing intrinsic in a mouse cell that says thou shalt live 18 months and then no longer exist,” she noted, adding that the data supports other studies suggesting that “non-cell autonomous” factors are important in aging.

The study on neuron lifespan arose from previous transplantation studies designed to examine both intrinsic and environmental cues neurons heed during development, said first author Lorenzo Magrassi, a neurosurgeon at the University of Pavia in Italy. The transplanted mouse neurons developed normally in young rat brains, but the researchers didn't know how long the neurons would survive in the longer-lived rats. "The question was: are the neurons going to die when [death] is supposed to occur in the life of the mice, or do they survive?" Magrassi explained.

To test whether mouse neurons could outlive mice, Magrassi and his colleagues Ferdinando Rossi and Ketty Leto at the University of Turin expressed green fluorescent protein (GFP) in neuron precursors from a strain of mice that lives an average of 18 months and transplanted the cells into GFP-negative Wistar rats, which can live twice as long. The mouse cells differentiated into various neuron types and integrated normally in their new environments. Magrassi's team concentrated on a specific derivative cell type, Purkinje cells (PCs), about 40 percent of which die off in mice well before the animals succumb to old age. In contrast, Wistar rats retain about 90 percent of their PCs until death.

The researchers found that instead of dying in droves like mouse PCs, the transplanted neurons survived like rat PCs, suggesting that factors in the microenvironment of rodents’ brains are driving their survival or death. There is no “predetermined genetic clock,” said Magrassi.
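To make the two scenarios concrete, here is a back-of-the-envelope comparison (the starting cell count of 1,000 is hypothetical; only the ~60% mouse-environment and ~90% rat-environment survival figures come from the article):

```python
def surviving(initial_cells, survival_rate):
    """Expected number of cells remaining at the end of the host's lifespan."""
    return round(initial_cells * survival_rate)

donor_pcs = 1000  # hypothetical transplanted Purkinje-cell population

# If survival were fixed by the donor's "genetic clock", transplanted
# mouse PCs should die at mouse-like rates (~40% loss). Instead, the
# study found they survived at the host's rat-like rates (~10% loss).
print(surviving(donor_pcs, 0.60))  # mouse-like environment -> 600
print(surviving(donor_pcs, 0.90))  # rat-like environment   -> 900
```

The gap between those two numbers is exactly what let the authors distinguish a host-environment effect from an intrinsic donor clock.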

In other ways, the mouse neurons retained some mouse-like characteristics—such as a smaller size—and also appeared to age as the rats aged, losing many of the protrusions seen in healthy young neurons. This makes sense, said Campisi, because the mechanisms driving aging and death are not necessarily the same.

The research suggests that changes to the neuronal environment can improve PC lifespan, Gilbert Bernier, a molecular biologist at the University of Montreal who was not involved in the research, wrote in an email to The Scientist. It’s not clear what causes mice to lose PCs in the first place, noted Bernier, who speculated that other types of neurons or immune cell-mediated inflammation could be involved. Magrassi and his collaborators are currently comparing the proteomes of host and transplanted mouse neurons to understand the differences in PC lifespan between mouse and rat.

Regardless of the mechanism, Magrassi is excited about the findings’ implications. “It means you can extend the maximum lifespan of an animal, and you don’t worry that the neurons are going to die before the death of the animal,” he said.

Also, I hope some of that money will be allocated for developing some guidelines now for the ethical problems associated with starting up complex brain simulations. At some point, those things will be not only arguably alive, but arguably human as well.

I'd rather we know what we're doing with AI rights before we commit too many atrocities.

Realizing we've committed accidental genocide would be a hell of a thing.

This line of thinking is so silly, machines are things, no matter how complex their programming may be.

This line of thinking is so silly, people are things, no matter how complex their thinking may be.

and what makes you so god damned special?

The fact that humans have no programming, obviously!

We are entirely programmed through hardware so we are far less modable than actual computers.

I'm very doubtful that we're currently capable of accidentally creating an AGI, even when emulating the human brain. However, withholding rights from something with a comparable level of intelligence would be a callous mistake.

Seriously, though, "people are fairy-dust-magic" gets in the way of science. People are boxes of wires.

It only hinders science if you're arrogant.

It's possible to believe in a soul and still support brain research. The only reason you wouldn't is if you believe in a soul/a god, and are arrogant enough to think that you're powerful enough to fuck that up via experimentation.

I don't think it's an issue of worrying about fucking things up.

It may be an issue that some people believe brain research won't answer the questions it presumes itself capable of answering, and so the money / time could be better spent elsewhere.

I don't think there's anyone who thinks that that doesn't already dislike research for far far stupider reasons.

We are entirely programmed through hardware so we are far less modable than actual computers.

I don't know. We can assimilate new capabilities by interfacing with data stored in a network of peers through locally standardized communications protocols. I mean, maybe we aren't modeable, but we're very extensible.