“What we observe is not nature itself, but nature exposed to our method of questioning.” — Werner Heisenberg

How much can we know of the world? This, of course, is the central question for physics, and has been since the beginning not just of modern science
as we know it, but of Western philosophy.

Around 650 BCE, Thales of Miletus first speculated
on the basic material fabric of reality. The essential tension here is one of perception. To describe the world, we must see it, sense it,
and go beyond, measuring it in all its subtle details. The problem is the “all.” We humans are necessarily blind to many aspects of physical
reality, and those aspects that we do capture are necessarily colored through the lenses of our perception.

Tissue engineers create artificial organs and tissues that can be used to develop and test new drugs, repair damaged tissue and even replace entire organs in the human body. However, current fabrication methods limit their ability to produce free-form shapes and achieve high cell viability.

Researchers at the Laboratory of Applied Photonics Devices (LAPD), in EPFL's School of Engineering, working with colleagues from Utrecht University, have come up with an optical technique that takes just a few seconds to sculpt complex tissue shapes in a biocompatible hydrogel containing stem cells. The resulting tissue can then be vascularized by adding endothelial cells.

Primarily, video games are a form of entertainment, a medium that has since evolved into cultural, artistic and narrative forms. Recently, however, they have moved past these definitions and entered the realms of education, as learning tools, and healthcare, as a form of therapy.
Emerging research suggests that video games today have the potential to be applied in preventative and therapeutic medicine — particularly as cognitive distraction, mental health management and psychotherapy. It’s incredible to think that something that was designed as a novelty has transcended its own design to become an integral part of our everyday lives — with the further potential to heal.

A team of experts has created the first artificially intelligent simulation of our universe. The problem is, they don’t understand it. Astrophysicists at the Flatiron Institute’s Center for Computational Astrophysics in New York City created a model called the Deep Density Displacement Model (or D3M for short). The idea was that they could map out the known universe and then study how different parts of the universe interacted with each other. But the accuracy of the simulation has baffled them. The computer is able to come up with tweaks to the universe involving things like dark matter or black holes, without being given training data on what those things are to begin with.

They're coming. The technology of the future won't be shiny and chrome with blinking lights, but rather will look just like us, speak with a human voice, remember our interactions, and reply with a wink and a smile. Meet the virtual beings.

To create artificial humans has been an ambition of ours since ancient times, as in the myths of Daedalus and Pygmalion, who created statues that came to life. In modern times, our imagination moved on from fashioning people out of clay or bronze. Instead, we imagined high-tech androids, such as Data from Star Trek, or the holographic doctor from Voyager. Perhaps our creations would even surpass us, like the immortal replicants from Blade Runner who were 'more human than human.'

Since the 1990s, researchers in the social and natural sciences have used computer simulations to try to answer questions about our world: What causes war? Which political systems are the most stable? How will climate change affect global migration? The quality of these simulations is variable, since they are limited by how well modern computers can mimic the vast complexity of our world — which is to say, not very well.
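A classic example of the genre is Thomas Schelling's segregation model, in which agents with only a mild preference for similar neighbors nonetheless produce sharply segregated neighborhoods. Below is a minimal sketch of such an agent-based simulation; the grid size, tolerance threshold, and move rule are illustrative choices, not taken from any particular study.

```python
import random

def schelling_step(grid, tolerance=0.3):
    """One round of a Schelling-style model: each agent moves to a
    random empty cell if too few of its neighbors share its type."""
    size = len(grid)
    empties = [(r, c) for r in range(size) for c in range(size)
               if grid[r][c] is None]
    for r in range(size):
        for c in range(size):
            agent = grid[r][c]
            if agent is None:
                continue
            # Eight neighbors, with wraparound (the world is a torus).
            neighbors = [grid[(r + dr) % size][(c + dc) % size]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0)]
            occupied = [n for n in neighbors if n is not None]
            if (empties and occupied and
                    sum(n == agent for n in occupied) / len(occupied) < tolerance):
                nr, nc = empties.pop(random.randrange(len(empties)))
                grid[nr][nc], grid[r][c] = agent, None
                empties.append((r, c))
    return grid

random.seed(0)
# A 10x10 world with two agent types and some empty cells.
grid = [[random.choice(["A", "B", None]) for _ in range(10)] for _ in range(10)]
a_before = sum(row.count("A") for row in grid)
for _ in range(20):
    schelling_step(grid)
a_after = sum(row.count("A") for row in grid)
```

Even this toy illustrates the article's point: the model's behavior depends heavily on arbitrary modeling choices, which is exactly why real-world simulations of war or migration are so hard to trust.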

But what if computers one day were to become so powerful, and these simulations so sophisticated, that each simulated “person” in the computer code were as complicated an individual as you or me, to such a degree that these people believed they were actually alive? And what if this has already happened?

Contact lenses capable of recording video and taking pictures could one day become a reality after Samsung was granted a patent in the US to develop the technology.

The lenses feature motion sensors, which means that wearers could control devices with their eye movements and potentially give commands to their devices remotely when blinking or using their peripheral vision.

The contact lenses could also beam photos and videos directly into a wearer's eyes.

“If we are living in a simulation, then the cosmos that we are observing is just a tiny piece of the totality of physical existence.”

What if everything around us — the people, the stars overhead, the ground beneath our feet, even our bodies and minds — were an elaborate illusion? What if our world were simply a hyper-realistic simulation, with all of us merely characters in some kind of sophisticated video game?

This, of course, is a familiar concept from science fiction books and films, including the 1999 blockbuster movie "The Matrix." But some physicists and philosophers say it’s possible that we really do live in a simulation — even if that means casting aside what we know (or think we know) about the universe and our place in it.

“I do not believe that most advanced alien civilizations will be biological,” says Susan Schneider of the University of Connecticut and the Institute for Advanced Study in Princeton. “The most sophisticated civilizations will be postbiological, forms of artificial intelligence or alien superintelligence.” Schneider is one of the few thinkers—outside the realm of science fiction—who have considered the notion that artificial intelligence is already out there, and has been for eons.

Researchers at the University of California, San Francisco (UCSF), have successfully synthesized human thoughts into real-time speech. This paves the way for consumer devices that can respond to thoughts without the need for the user to audibly state commands.

In April, researchers at UCSF announced a ‘neural speech prosthesis’ that could produce relatively natural-sounding speech from decoded brain activity. In a study published today, they revealed that they continued that work and have successfully decoded brain activity as speech in real-time. They have been able to turn brain signals for speech into written sentences. The project aims to transform how patients with severe disabilities can communicate in the future.

With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It’s an open question as to which will come first, but a technologically boosted brain could be just as powerful — and just as dangerous — as AI.

As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.

Do you know you are a multidimensional being? Your body isn’t big enough to contain you.

In recently published research produced by a team from the Blue Brain Project, neuroscientists applied a classic branch of math called algebraic topology in a whole new way to peer into the brain, discovering that it contains groups of neurons bound into multi-dimensional geometric structures.

Each neuron group, according to size, forms its own high-dimensional geometric object. “We found a world that we had never imagined,” says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”
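The geometric objects described here are simplices: a group of k+1 neurons that are all connected to one another forms a k-dimensional simplex. As a rough illustration of the counting step only, here is a sketch over a toy undirected graph; the actual study used directed connectivity and far more sophisticated tooling.

```python
from itertools import combinations

def count_simplices(adj, max_dim):
    """Count k-dimensional simplices (cliques of k+1 mutually
    connected nodes) in an undirected graph given as an adjacency dict."""
    nodes = sorted(adj)
    counts = {}
    for k in range(1, max_dim + 1):
        n = 0
        for combo in combinations(nodes, k + 1):
            # A simplex requires every pair in the group to be connected.
            if all(b in adj[a] for a, b in combinations(combo, 2)):
                n += 1
        counts[k] = n
    return counts

# Toy connectivity graph: a 4-clique (a 3-dimensional simplex,
# i.e. a tetrahedron) plus one dangling edge.
adj = {
    0: {1, 2, 3},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {0, 1, 2, 4},
    4: {3},
}
print(count_simplices(adj, 3))  # {1: 7, 2: 4, 3: 1}
```

The brute-force pair check is exponential in clique size, which hints at why finding such structures in networks of millions of neurons is a serious computational undertaking.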

I have studied emotional intelligence as a hobby for a long time. Until recently, I believed emotional intelligence would remain one of the core human advantages even after artificial intelligence has taken over all tasks requiring memorization and logic.
During the past few years, I’ve focused my studies on emotionally intelligent algorithms, as it is the business of my startup, Inbot.
The more I have researched them, the more convinced I have become that people are no longer ahead of AI at emotional intelligence.

One of the weirdest theoretical implications of quantum mechanics is that different observers can give different—though equally valid—accounts of the same sequence of events. As highlighted by physicist Carlo Rovelli in his relational quantum mechanics (RQM), this means that there should be no absolute, observer-independent physical quantities. All physical quantities—the whole physical universe—must be relative to the observer. The notion that we all share the same physical environment must, therefore, be an illusion.

No other media meets our emotional and social needs like electronic games.

I have an agonizing decision to make. Should I save a governing body that has never done a thing for me? It doesn’t even contain a single person from my race. The aliens of the galactic Council decided long ago that my people should not be trusted, that we were aggressive, entitled, and short-sighted. I’m a soldier engaged in a fight to save the entire galaxy. And now the Council wants my help to destroy their assailants? My companion Ashley is against it. “You can’t sacrifice human lives to save the Council!” she yells. “What have they ever done for us?” Another companion, Garrus, rebuffs Ashley. “This is bigger than humanity!” Schadenfreude tempts me to let the patronizing Council be pulverized; a pro-human one could replace it if we survive. But I don’t want to give cynical aliens an opportunity to attribute the lowest-possible motive to humans. I want to refute the impression that we are an arrogant, upstart species out for itself. I command humanity’s space armada to target the forces gunning for the Council, no matter the cost. I feel a rush of bravery and idealism. I love playing Mass Effect.

In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.
In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.

In an industry that touches so many lives, accurate terminology is essential.

I've worked in UX for the better part of a decade. From now on, I plan to remove the word “user” and any associated terms—like “UX” and “user experience”—from my vocabulary. It’ll take time. I’ll start by trying to avoid using them in conversations at work. I’ll erase them from my LinkedIn profile. I’ll find new ways to describe my job when making small talk. I will experiment and look for something better.
I don’t have any strong alternatives to offer right now, but I’m confident I’ll find some. I think of it as a challenge. The U-words are everywhere in tech, but they no longer reflect my values or my approach to design and technology. I can either keep using language I disagree with, or I can begin to search for what’s next. I choose to search.

A software program called “Annie” uses machine learning to place refugees in cities where they are most likely to be welcomed and find success.

PITTSBURGH—Half a world away from the refugee camp in Uganda where he lived for a dozen years, Baudjo Njabu tells me about his first winter in the United States.

“The biggest challenge is the cold,” he said in Swahili, speaking through an interpreter. We’re sitting on dining chairs in his sparsely furnished living room. Outside, snow covers the grass on the other side of the glass patio doors that lead to the back of the townhouse he is renting in western Pittsburgh. Njabu recounts how his children missed school recently because the bus was delayed and they couldn’t bear the frigid temperatures. His daughter and two sons sit with their mother on a leather couch nearby, half-listening to his replies, distracted by their cellphones and an old Western playing on the television.

Should buddhas own smartphones and gurus use Google? Mindfulness is often taken to mean stepping out of the technological mainstream. But rejecting technology is rejecting the natural course of human evolution, according to personal transformation pioneer Deepak Chopra.

“Personally, I am a big fan of technology,” Chopra (pictured) said during an interview with Lisa Martin, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio. “If you don’t relate to technology, you will become irrelevant. That’s a Darwinian principle. Either you adapt and use it or you’re not relevant anymore.”

Chopra and Martin spoke during the Coupa Inspire event in Las Vegas, at which Chopra was a keynote speaker. They discussed the interaction between technology and consciousness.

Elon Musk’s rocket company SpaceX plans to launch 60 internet-providing satellites. The plan for Starlink, as the project is called, is to put a network of nearly 12,000 internet satellites in orbit, which could move internet data about 50% faster than existing fiber-optic cables. Starlink could bring cheap, fast internet to remote areas, airplanes, ships, and cars, plus make international teleconferencing and online gaming nearly lag-free. Financial institutions would also have a lot to gain: Starlink could relay information about faraway markets significantly faster than modern technologies permit. Musk revealed a number of details about Starlink during a call with reporters on Wednesday.
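The "about 50% faster" claim is mostly a matter of the speed of light: signals in glass fiber travel at roughly two-thirds of the vacuum speed that laser links between satellites could approach. A back-of-envelope sketch follows; the route distance and refractive index are illustrative values, and the extra path length up to orbit and back down is ignored.

```python
C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
FIBER_INDEX = 1.47             # typical refractive index of optical fiber
C_FIBER_KM_S = C_VACUUM_KM_S / FIBER_INDEX

# Illustrative great-circle distance, roughly London to Singapore.
distance_km = 10_900

fiber_ms = distance_km / C_FIBER_KM_S * 1000   # one-way latency via fiber
vacuum_ms = distance_km / C_VACUUM_KM_S * 1000  # one-way latency at vacuum speed

print(f"fiber : {fiber_ms:.1f} ms one-way")
print(f"vacuum: {vacuum_ms:.1f} ms one-way")
print(f"speed-up: {fiber_ms / vacuum_ms:.2f}x")  # ~1.47x, i.e. ~47% faster
```

The ratio is simply the refractive index of the fiber, which is where figures like "about 50% faster" come from.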

The brain-computer interface lets paralyzed people type using their thoughts.

For the first time, doctors are preparing to test a brain-computer interface (BCI) that can be implanted onto a human brain, no open surgery required.
The Stentrode, a neural implant that can let paralyzed people communicate, can be delivered to a patient’s brain through the jugular vein — and the company that developed it, Synchron, just got approval to begin human experimentation.
By leaving the skull sealed shut, patients could receive their neural implants without running as great a risk of seizures, strokes, or permanent neural impairments, all of which can be caused by open-brain surgery.

Scientists from the University of Bristol's Intangible Realities Laboratory (IRL) and ETH Zurich have used virtual reality and artificial intelligence algorithms to learn the details of chemical change.

In a cover article published today in The Journal of Physical Chemistry, researchers across the University of Bristol and ETH Zurich describe how advanced interaction and visualisation frameworks using virtual reality (VR) enable humans to train machine-learning algorithms and accelerate scientific discovery.

Cells in the body are wired like computer chips to direct signals that instruct how they function, research suggests. Unlike a fixed circuit board, however, cells can rapidly rewire their communication networks to change their behaviour. The discovery of this cell-wide web turns our understanding of how instructions spread around a cell on its head.

In a new paper in Advanced Materials, researchers at the University of Connecticut describe a new sensor embedded in a layer of silicone skin that can help burn victims “feel” again and even confer some superhuman sensory perception.

You wake up on a bus, surrounded by all your remaining possessions. A few fellow passengers slump on pale blue seats around you, their heads resting against the windows. You turn and see a father holding his son. Almost everyone is asleep. But one man, with a salt-and-pepper beard and khaki vest, stands near the back of the bus, staring at you. You feel uneasy and glance at the driver, wondering if he would help you if you needed it. When you turn back around, the bearded man has moved toward you and is now just a few feet away. You jolt, fearing for your safety, but then remind yourself there’s nothing to worry about. You take off the Oculus helmet and find yourself back in the real world, in Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford University.

One day soon you may be filling your lungs with crisp ocean air, your arms bathed in warm light as the sun sets over softly lapping waters and you may wonder, is this real? Or are scientists projecting holograms into my brain to create a vivid sensory experience that isn’t actually happening? A group of researchers at University of California, Berkeley are in the early stages of testing their ability to create, edit and scrub sensory experiences from your brain, both real-time and stored experiences: memories.

Using light to make us see what isn’t there.

Different sensory experiences show up in brain imaging as patterns of neurons firing in sequence. Neuroscientists are trying to reverse-engineer experiences by stimulating the neurons to excite the same neural patterns.

Imagine a world, familiar from science-fantasy movies, in which communicating directly to and from the brain has become a reality. This is the prediction of researchers at UC Berkeley and the US Institute for Molecular Manufacturing.

The team of scientists published their prediction in Frontiers in Neuroscience regarding the development of a "Human Brain/Cloud Interface" (B/CI) that "connects brain cells to vast cloud-computing networks in real time," as reported by Medical Xpress.

In a new interview, MIT researcher Rizwan Virk told Digital Trends that, in his estimation, we’re probably living in a simulation.
“I would say it’s somewhere between 50 and 100 percent,” he told the site. “I think it’s more likely that we’re in simulation than not.”

An emerging techno-consumerism is taking aim at what makes us human: love, happiness, politics, the search for meaning and more. It amounts to the beginnings of a new kind of modernity.

The founders of a new, AI-fuelled chatbot want it to become your best friend and most perceptive counsellor. An intelligent robot pet promises to assuage chronic loneliness among the elderly. The creators of an immersive virtual world — meant to be populated by thousands or even millions of users — say it will generate new insight into the nature of justice and democracy.
Three seemingly unrelated snapshots of these dizzying, accelerated times. But look closer and they all point towards the beginnings of a profound shift in our relationship to technology. How we use it and relate to it. What we think, ultimately, it is for.
This shift amounts to the emergence of a new kind of modern experience; a new kind of modernity. Let’s assign this emerging moment a name — augmented modernity.

Ever since Einstein posited that space and time were inextricably linked, scientists have wondered where the cosmic web called spacetime comes from.
Now, ongoing research in quantum physics may finally arrive at an explanation: A bizarre phenomenon called quantum entanglement could be the underlying basis for the four dimensions of space and time in which we all live, according to a deep dive by Knowable Magazine. In fact, in a mind-boggling twist, our reality could be a “hologram” of this quantum state.

Sci-fi movies like Star Wars and Avatar depict holograms that you can see from any angle, but the reality is a lot less scintillating. So far, the only true color hologram we've seen has come from a tiny, complicated display created by a Korean group led by LG, while the rest are just "Pepper's Ghost"-style illusions. Now, researchers from Brigham Young University (BYU) have created a true 3D hologram, or "volumetric image," to use the correct term. "We can think about this image like a 3D-printed object," said BYU assistant professor and lead author Daniel Smalley.

A breakthrough in studying light might just be the ticket to the future of quantum computing. Researchers at EPFL have found a way to determine how light behaves beyond the limitations of wavelengths, opening the door to encoding quantum data in a sci-fi style holographic light pattern. The team took advantage of the quantum nature of the interaction between electrons and light to separate beams in terms of energy, not space, which let them use light pulses to encrypt info on the electron wave and map it with a speedy electron microscope.

Existing techniques for both studying light and extracting 3D info are inherently limited by the size of wavelengths. The new approach sidesteps that limit, allowing a considerably higher resolution that can even include holographic movies of fast-moving objects.

Scientists have created a living organism whose DNA is entirely human-made — perhaps a new form of life, experts said, and a milestone in the field of synthetic biology.

Researchers at the Medical Research Council Laboratory of Molecular Biology in Britain reported on Wednesday that they had rewritten the DNA of the bacteria Escherichia coli, fashioning a synthetic genome four times larger and far more complex than any previously created.

Contrary to popular belief, peace and quiet is all about the noise in your head.

One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme—it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

The first U.S. trial of CRISPR in humans has begun, NPR reported Tuesday. Two patients are currently being treated as part of a University of Pennsylvania study. Per NPR, both have difficult-to-treat forms of cancer and both have relapsed after regular treatments. As part of the trial, researchers are taking immune cells from the patients’ own bodies and editing them with CRISPR before putting them back in. The hope is that these edited cells will be better at identifying and attacking the cancer than their unaltered counterparts. According to the U.S. government clinical trial registry, the researchers are hoping to enroll 18 people in their study.

Is this life real? Philosophers and physicists say we might be living in a computer simulation, but how can we tell? And does it matter?

Our species is not going to last forever. One way or another, humanity will vanish from the Universe, but before it does, it might summon together sufficient computing power to emulate human experience, in all of its rich detail. Some philosophers and physicists have begun to wonder if we’re already there. Maybe we are in a computer simulation, and the reality we experience is just part of the program.

Imagine, if you will, a future technology that could provide instant access to information and artificial intelligence simply by thinking of it: communication, education, work, privacy, security, and the world as we know it would be dramatically transformed.

An article published in Frontiers in Neuroscience predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence, and computation will lead to the development of a Human Brain/Cloud Interface that will connect brain cells to vast cloud computing networks in real time within this century, bringing about the internet of thought.

In Part I of this series, Religion and the Simulation Hypothesis: Is God an AI?, we looked at the implications of the Simulation Hypothesis, the theory that we are all living inside a sophisticated video game, as a model for how many things that are religious in nature might actually be implemented using science and technology. We looked briefly at the groundbreaking film The Matrix and how it brought this idea forward into popular consciousness with its release 20 years ago. We also looked at some of the central tenets of the Western (or more accurately, Middle Eastern or Abrahamic) religious traditions to show that they were not only consistent with this new theory, but that this theory provided a way to bridge the ever-widening gap between religion and science.
In this second part of the series, we turn to the Eastern religious traditions, Hinduism and Buddhism in particular (and some of their offshoots), and look at some of their central tenets. While we had to search for ways that the simulation hypothesis might be implied in some of the core beliefs of the Western religions, the simulation hypothesis (or more specifically, the video game version of the simulation hypothesis) seems almost tailor-made to fit into these traditions.

A localization phenomenon boosts the accuracy of solving quantum many-body problems with quantum computers. These problems are otherwise challenging for conventional computers. This brings such digital quantum simulation within reach using quantum devices available today.

Humanity could be on the verge of an unprecedented merging of human biology with advanced technology, fusing our thoughts and knowledge directly with the cloud in real-time — and this incredible turning point may be just decades away, scientists say.

In a new research paper exploring what they call the 'human brain/cloud interface', scientists explain the technological underpinnings of such a future system and address the barriers we'll need to overcome before this sci-fi dream becomes reality.

Researchers have been making massive ‘jaw-dropping’ strides in robotics lately. We’re already aware of Sophia, the ‘almost human’ robot created by former Disney Imagineer David Hanson, that can inspire feelings of love among humans. Now, scientists at Cornell University have come out with a new ‘lifelike’ material that can move and eat on its own. What’s even more mind-boggling is that this material can also die and decay, just like living beings.

A year ago, you couldn’t go anywhere in Silicon Valley without being reminded in some way of Tristan Harris. The former Googler was giving talks, appearing on podcasts, counseling Congress, sitting on panels, posing for photographers. The central argument of his evangelism—that the digital revolution had gone from expanding our minds to hijacking them—had hit the zeitgeist, and maybe even helped create it.

Every December, Adam Savage—star of the TV show MythBusters—releases a video reviewing his “favorite things” from the previous year. In 2018, one of his highlights was a set of Magic Leap augmented reality goggles. After duly noting the hype and backlash that have dogged the product, Savage describes an epiphany he had while trying on the headset at home, upstairs in his office. “I turned it on and I could hear a whale,” he says, “but I couldn’t see it. I’m looking around my office for it. And then it swims by my windows—on the outside of my building! So the glasses scanned my room and it knew that my windows were portals and it rendered the whale as if it were swimming down my street. I actually got choked up.” What Savage encountered on the other side of the glasses was a glimpse of the mirrorworld.

Facebook is working on a (non-invasive) system that will let you type straight from your brain about 5x faster than you can type on your phone today. The idea is to allow people to use their thoughts to navigate intuitively through augmented reality—the neuro-driven version of the world recently described by Kevin Kelly. No typing—no speaking, even—to distract you or slow you down as you interact with digital additions to the landscape.

In October, CEO Rony Abovitz first shared the idea of the “Magicverse,” a series of digital layers that would exist in AR over the physical world.

On Saturday, the company elaborated on the concept with a blog post and new interview — and its vision of the future is one in which the line between the physical and digital realms blurs until it almost disappears.

What does it mean for humans to thrive in the age of the machine? This is the issue that London Business School professors Andrew Scott and Lynda Gratton are wrestling with in their second major exploration project.

Google was founded over two decades ago, but they released their first public set of ethical technology principles just last year. Facebook launched out of a Harvard dorm in 2004, but they formally launched an ethics program with a public investment last month. The era of tech companies moving fast and breaking things removed from public accountability is waning, if not entirely over. That’s precisely why it’s important for industry to understand–and admit in some cases–that there’s been a need for accountable, transparent, and companywide ethical practices in technology since the beginning.

When Morpheus told us our reality was fake, it sounded far-fetched. Since then, though, the idea has picked up steam. In 2001, two years after The Matrix hit theaters, Oxford University philosopher Nick Bostrom circulated the first draft of his “simulation argument,” which posits three scenarios: (1) Humanity will go extinct before creating technology powerful enough to run convincing simulations of reality; (2) humanity will live to see such technology but decide, for whatever reason, not to run any simulations; (3) humanity will create that technology and run many different simulations of its evolutionary history — in which case there would be lots of simulated realities and only one non-simulated one, so maybe it’s more likely than not that we’re living in a simulation right now. That third scenario has excited many over the years, including Elon Musk, who in 2016 put our odds of living in a non-simulated reality at “one in billions.” We called Bostrom to discuss his paper’s legacy.
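The arithmetic behind odds like Musk's "one in billions" is simple observer counting: if simulated and non-simulated observers are indistinguishable from the inside, your chance of being simulated is just the simulated fraction of all observers. A sketch under that assumption (the observer counts are illustrative, not anyone's published estimate):

```python
def odds_simulated(n_simulations, observers_per_sim=1.0):
    """Fraction of all observers who are simulated, assuming one base
    reality and n_simulations ancestor simulations, each containing
    observers_per_sim times as many observers as the base reality."""
    simulated = n_simulations * observers_per_sim
    return simulated / (simulated + 1.0)

for n in (1, 10, 1_000, 1_000_000_000):
    print(f"{n:>13,} simulations -> P(simulated) = {odds_simulated(n):.9f}")
```

With a single simulation the odds are already even; with a billion, the chance of being in the one base reality drops to roughly one in a billion, which is the shape of Bostrom's third scenario.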

I was in line for coffee at the Business Innovation Factory (BIF) Summit this past September and was starting to get jittery. I began making conversation with the guy in front of me to distract myself, and since java was on my mind I figured that was as good a topic as any. So I made a throwaway comment about how useless I was until I got my morning cup.

According to a new study, people who saw what it would be like to lose their jobs and homes using virtual reality developed longer-lasting compassion toward the homeless compared to those who explored other media versions of the VR scenario, like text.

“Experiences are what define us as humans, so it’s not surprising that an intense experience in VR is more impactful than imagining something,” says Jeremy Bailenson, a professor of communication at Stanford University and coauthor of the paper, which appears in PLOS ONE.

An illusion that mimics near-death experiences seems to reduce people’s fear of dying.

Mel Slater at the University of Barcelona, Spain, and his team have used virtual reality headsets to create the illusion of being separate from your own body. They did this by first making 32 volunteers feel like a virtual body was their own. While wearing a headset, the body would match any real movements the volunteers made. When a virtual ball was dropped onto the foot of the virtual body, a vibration was triggered on the person’s real foot.

For six minutes on January 23, 2017, the coldest known spot in the universe floated 150 miles above Kiruna, Sweden. As far as we know, the coldest anything in nature can be is absolute zero on the Kelvin scale, which is –459.67°F and –273.15°C. This postage-stamp-sized atom chip, packed tight with thousands of rubidium-87 atoms, was just a few billionths of a degree warmer than that. The atom chip was up there in low orbit to help a team of scientists study up-close some of the oddest, least-understood stuff there is: Bose-Einstein condensate (BEC). The team of German scientists was led by Dennis Becker of the QUEST-Leibniz Research School at Leibniz University Hannover in Hanover, Germany.
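The two absolute-zero figures quoted are the same temperature expressed in different scales, which a quick arithmetic check confirms (the chip temperature below is an illustrative nanokelvin value, not the experiment's measured figure):

```python
def celsius_to_kelvin(c):
    """Kelvin and Celsius degrees are the same size; only the zero shifts."""
    return c + 273.15

def fahrenheit_to_kelvin(f):
    """Shift Fahrenheit so absolute zero is 0, then rescale 9°F -> 5 K."""
    return (f + 459.67) * 5.0 / 9.0

# Both quoted absolute-zero figures map to 0 K.
print(celsius_to_kelvin(-273.15))     # 0.0
print(fahrenheit_to_kelvin(-459.67))  # 0.0

# "A few billionths of a degree warmer" than absolute zero:
chip_temperature_k = 3e-9  # illustrative, a few nanokelvin
print(f"chip: {chip_temperature_k:.0e} K above absolute zero")
```
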

The world’s largest neuromorphic supercomputer, designed and built to work in the same way a human brain does, has been fitted with its landmark one-millionth processor core and is being switched on for the first time.

The human brain may become the next frontier in hacking, cybersecurity researchers have warned in a paper outlining the vulnerabilities of neural implant technologies that can potentially expose and compromise our consciousness.

Contact lenses capable of recording video and taking pictures could one day become a reality after Samsung was granted a patent in the US to develop the technology.

The lenses feature motion sensors, which means that wearers could control devices with their eye movements and potentially give commands to their devices remotely when blinking or using their peripheral vision.

The contact lenses could also beam photos and videos directly into a wearer’s eyes.

With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence, or intelligence amplification (IA). It’s an open question as to which will come first, but a technologically boosted brain could be just as powerful, and just as dangerous, as AI.

As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.

In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.
In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.

The brain-computer interface lets paralyzed people type using their thoughts.

For the first time, doctors are preparing to test a brain-computer interface (BCI) that can be implanted onto a human brain, no open surgery required.
The Stentrode, a neural implant that can let paralyzed people communicate, can be delivered to a patient’s brain through the jugular vein — and the company that developed it, Synchron, just got approval to begin human experimentation.
By leaving the skull sealed shut, patients could receive their neural implants without running as great a risk of seizures, strokes, or permanent neural impairments, all of which can be caused by open-brain surgery.

In a new paper in Advanced Materials, researchers at the University of Connecticut describe a new sensor embedded in a layer of silicone skin that can help burn victims “feel” again and even confer some superhuman sensory perception.

One day soon you may be filling your lungs with crisp ocean air, your arms bathed in warm light as the sun sets over softly lapping waters and you may wonder, is this real? Or are scientists projecting holograms into my brain to create a vivid sensory experience that isn’t actually happening? A group of researchers at University of California, Berkeley are in the early stages of testing their ability to create, edit and scrub sensory experiences from your brain, both real-time and stored experiences: memories.

Using light to make us see what isn’t there.

Different sensory experiences show up in brain imaging as patterns of neurons firing in sequence. Neuroscientists are trying to reverse-engineer experiences by stimulating the neurons to excite the same neural patterns.

Imagine the world of science-fantasy movies, where we communicate directly to and from the brain, becoming a reality. This is the prediction of researchers at UC Berkeley and the US Institute for Molecular Manufacturing.

The team of scientists published their prediction in Frontiers in Neuroscience regarding the development of a "Human Brain/Cloud Interface" (B/CI) that "connects brain cells to vast cloud-computing networks in real time," as reported by Medical Xpress.

Imagine, if you will, future technology that provides instant access to information and artificial intelligence simply by thinking of it. Communication, education, work, privacy, security: the world as we know it would be dramatically transformed.

An article published in Frontiers in Neuroscience predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence, and computation will lead to the development of a Human Brain/Cloud Interface that will connect brain cells to vast cloud-computing networks in real time within this century, bringing about the “internet of thought.”

Researchers have been making massive ‘jaw-dropping’ strides in robotics lately. We’re already aware of Sophia, the ‘almost human’ robot created by former Disney Imagineer David Hanson, that can inspire feelings of love among humans. Now, scientists at Cornell University have come out with a new ‘lifelike’ material that can move and eat on its own. What’s even more mind-boggling is that this material can also die and decay, just like living beings.

Cells in the body are wired like computer chips to direct signals that instruct how they function, research suggests. Unlike a fixed circuit board, however, cells can rapidly rewire their communication networks to change their behaviour. The discovery of this cell-wide web turns our understanding of how instructions spread around a cell on its head.

The first U.S. trial of CRISPR in humans has begun, NPR reported Tuesday. Two patients are currently being treated as part of a University of Pennsylvania study. Per NPR, both have difficult-to-treat forms of cancer and both have relapsed after regular treatments. As part of the trial, researchers are taking immune cells from the patients’ own bodies and editing them with CRISPR before putting them back in. The hope is that these edited cells will be better at identifying and attacking the cancer than their unaltered counterparts. According to the U.S. government clinical trial registry, the researchers are hoping to enroll 18 people in their study.

Scientists have created a living organism whose DNA is entirely human-made — perhaps a new form of life, experts said, and a milestone in the field of synthetic biology.

Researchers at the Medical Research Council Laboratory of Molecular Biology in Britain reported on Wednesday that they had rewritten the DNA of the bacterium Escherichia coli, fashioning a synthetic genome four times larger and far more complex than any previously created.

Humanity could be on the verge of an unprecedented merging of human biology with advanced technology, fusing our thoughts and knowledge directly with the cloud in real-time – and this incredible turning point may be just decades away, scientists say.

In a new research paper exploring what they call the 'human brain/cloud interface', scientists explain the technological underpinnings of what such a future system might be, and also address the barriers we'll need to overcome before this sci-fi dream becomes reality.

An emerging techno-consumerism is taking aim at what makes us human: love, happiness, politics, the search for meaning and more. It amounts to the beginnings of a new kind of modernity.

The founders of a new, AI-fuelled chatbot want it to become your best friend and most perceptive counsellor. An intelligent robot pet promises to assuage chronic loneliness among the elderly. The creators of an immersive virtual world — meant to be populated by thousands or even millions of users — say it will generate new insight into the nature of justice and democracy.
Three seemingly unrelated snapshots of these dizzying, accelerated times. But look closer and they all point towards the beginnings of a profound shift in our relationship to technology. How we use it and relate to it. What we think, ultimately, it is for.
This shift amounts to the emergence of a new kind of modern experience; a new kind of modernity. Let’s assign this emerging moment a name — augmented modernity.

“I do not believe that most advanced alien civilizations will be biological,” says Susan Schneider of the University of Connecticut and the Institute for Advanced Study in Princeton. “The most sophisticated civilizations will be postbiological, forms of artificial intelligence or alien superintelligence.” Schneider is one of the few thinkers, outside the realm of science fiction, who have considered the notion that artificial intelligence is already out there, and has been for eons.

Should buddhas own smartphones and gurus use Google? Mindfulness is often taken to mean stepping out of the technological mainstream. But rejecting technology is rejecting the natural course of human evolution, according to personal transformation pioneer Deepak Chopra.

“Personally, I am a big fan of technology,” Chopra (pictured) said during an interview with Lisa Martin, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio. “If you don’t relate to technology, you will become irrelevant. That’s a Darwinian principle. Either you adapt and use it or you’re not relevant anymore.”

Chopra and Martin spoke during the Coupa Inspire event in Las Vegas, at which Chopra was a keynote speaker. They discussed the interaction between technology and consciousness.

I have studied emotional intelligence as a hobby for a long time. Until recently, I believed emotional intelligence would remain one of the core advantages we humans hold after artificial intelligence has taken over all tasks requiring memorization and logic.
During the past few years, I’ve focused my studies on emotionally intelligent algorithms, as it is the business of my startup, Inbot.
The more I have researched them, the more convinced I have become that people are no longer ahead of AI at emotional intelligence.

You wake up on a bus, surrounded by all your remaining possessions. A few fellow passengers slump on pale blue seats around you, their heads resting against the windows. You turn and see a father holding his son. Almost everyone is asleep. But one man, with a salt-and-pepper beard and khaki vest, stands near the back of the bus, staring at you. You feel uneasy and glance at the driver, wondering if he would help you if you needed it. When you turn back around, the bearded man has moved toward you and is now just a few feet away. You jolt, fearing for your safety, but then remind yourself there’s nothing to worry about. You take off the Oculus helmet and find yourself back in the real world, in Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford University.

A year ago, you couldn’t go anywhere in Silicon Valley without being reminded in some way of Tristan Harris. The former Googler was giving talks, appearing on podcasts, counseling Congress, sitting on panels, posing for photographers. The central argument of his evangelism—that the digital revolution had gone from expanding our minds to hijacking them—had hit the zeitgeist, and maybe even helped create it.

In an industry that touches so many lives, accurate terminology is essential

I've worked in UX for the better part of a decade. From now on, I plan to remove the word “user” and any associated terms—like “UX” and “user experience”—from my vocabulary. It’ll take time. I’ll start by trying to avoid using them in conversations at work. I’ll erase them from my LinkedIn profile. I’ll find new ways to describe my job when making small talk. I will experiment and look for something better.
I don’t have any strong alternatives to offer right now, but I’m confident I’ll find some. I think of it as a challenge. The U-words are everywhere in tech, but they no longer reflect my values or my approach to design and technology. I can either keep using language I disagree with, or I can begin to search for what’s next. I choose to search.

A software program called “Annie” uses machine learning to place refugees in cities where they are most likely to be welcomed and find success.
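
Under the hood, a placement system like this can be framed as an assignment problem: given model-predicted success scores for each person–city pair, choose the one-to-one pairing that maximizes total predicted success. A toy brute-force sketch under that assumption (the names and scores are invented for illustration; Annie's actual model and constraints are not described in this excerpt):

```python
from itertools import permutations

# Hypothetical predicted probability of success for each (person, city) pair.
people = ["A", "B", "C"]
cities = ["Pittsburgh", "Boise", "Tucson"]
scores = {
    ("A", "Pittsburgh"): 0.7, ("A", "Boise"): 0.4, ("A", "Tucson"): 0.5,
    ("B", "Pittsburgh"): 0.3, ("B", "Boise"): 0.8, ("B", "Tucson"): 0.6,
    ("C", "Pittsburgh"): 0.6, ("C", "Boise"): 0.5, ("C", "Tucson"): 0.9,
}

def best_assignment(people, cities, scores):
    """Try every one-to-one pairing and keep the highest total score."""
    best, best_total = None, float("-inf")
    for perm in permutations(cities):
        total = sum(scores[(p, c)] for p, c in zip(people, perm))
        if total > best_total:
            best, best_total = list(zip(people, perm)), total
    return best, best_total

assignment, total = best_assignment(people, cities, scores)
print(assignment)  # A -> Pittsburgh, B -> Boise, C -> Tucson
```

Brute force is only viable for tiny inputs; at realistic scale this kind of matching is solved with the Hungarian algorithm or similar optimization methods.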

PITTSBURGH—Half a world away from the refugee camp in Uganda where he lived for a dozen years, Baudjo Njabu tells me about his first winter in the United States.

“The biggest challenge is the cold,” he said in Swahili, speaking through an interpreter. We’re sitting on dining chairs in his sparsely furnished living room. Outside, snow covers the grass on the other side of the glass patio doors that lead to the back of the townhouse he is renting in western Pittsburgh. Njabu recounts how his children missed school recently because the bus was delayed and they couldn’t bear the frigid temperatures. His daughter and two sons sit with their mother on a leather couch nearby, half-listening to his replies, distracted by their cellphones and an old Western playing on the television.

Contrary to popular belief, peace and quiet is all about the noise in your head.

One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme—it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

Google was founded over two decades ago, but it released its first public set of ethical technology principles just last year. Facebook launched out of a Harvard dorm in 2004, but it formally launched an ethics program with a public investment only last month. The era of tech companies moving fast and breaking things, removed from public accountability, is waning, if not entirely over. That’s precisely why it’s important for the industry to understand, and in some cases admit, that there’s been a need for accountable, transparent, and companywide ethical practices in technology since the beginning.

What does it mean for humans to thrive in the age of the machine? This is the issue that London Business School professors Andrew Scott and Lynda Gratton are wrestling with in their second major exploration project.

Elon Musk’s rocket company SpaceX plans to launch 60 internet-providing satellites. The plan for Starlink, as the project is called, is to put a network of nearly 12,000 internet satellites in orbit, which could move internet data about 50% faster than existing fiber-optic cables. Starlink could bring cheap, fast internet to remote areas, airplanes, ships, and cars, plus make international teleconferencing and online gaming nearly lag-free. Financial institutions would also have a lot to gain: Starlink could relay information about faraway markets significantly faster than modern technologies permit. Musk revealed a number of details about Starlink during a call with reporters on Wednesday.
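
The "about 50% faster" claim follows from refractive index: light in silica fiber travels at roughly c/1.47, while light in vacuum travels at c, so a vacuum path of equal length is about 47% faster. A rough sketch of that comparison (the 1.47 index and the route length are illustrative assumptions, not Starlink specifications, and real satellite links add extra ground-to-orbit distance):

```python
C = 299_792.458     # speed of light in vacuum, km/s
FIBER_INDEX = 1.47  # typical refractive index of silica fiber (assumed)

def one_way_ms(distance_km, speed_km_s):
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1000

# Hypothetical transatlantic route, ~5,600 km point to point.
route_km = 5_600
fiber_ms = one_way_ms(route_km, C / FIBER_INDEX)
vacuum_ms = one_way_ms(route_km, C)

print(f"fiber:  {fiber_ms:.1f} ms")
print(f"vacuum: {vacuum_ms:.1f} ms")
print(f"vacuum is {fiber_ms / vacuum_ms - 1:.0%} faster")  # ~47%
```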

Imagine the world of science-fantasy movies, in which we communicate directly to and from the brain, becoming a reality. That is the prediction of researchers at UC Berkeley and the US Institute for Molecular Manufacturing.

The team of scientists published their prediction in Frontiers in Neuroscience regarding the development of a "Human Brain/Cloud Interface" (B/CI) that "connects brain cells to vast cloud-computing networks in real time," as reported by Medical Xpress.

Imagine, if you will, a future technology that provides instant access to information and artificial intelligence simply by thinking of it: communication, education, work, privacy, security, and the world as we know it would be dramatically transformed.

An article published in Frontiers in Neuroscience predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence, and computation will lead to the development of a Human Brain/Cloud Interface connecting brain cells to vast cloud-computing networks in real time within this century, bringing about an "internet of thought."

Researchers at the University of California, San Francisco (UCSF) have successfully synthesized speech from human brain activity in real time. This paves the way for consumer devices that can respond to thoughts without the need for the user to state commands aloud.

In April, researchers at UCSF announced a ‘neural speech prosthesis’ that could produce relatively natural-sounding speech from decoded brain activity. In a study published today, they revealed that they have continued that work and successfully decoded brain activity as speech in real time, turning brain signals for speech into written sentences. The project aims to transform how patients with severe disabilities communicate in the future.

One of the weirdest theoretical implications of quantum mechanics is that different observers can give different—though equally valid—accounts of the same sequence of events. As highlighted by physicist Carlo Rovelli in his relational quantum mechanics (RQM), this means that there should be no absolute, observer-independent physical quantities. All physical quantities—the whole physical universe—must be relative to the observer. The notion that we all share the same physical environment must, therefore, be an illusion.

In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.
In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.

The brain-computer interface lets paralyzed people type using their thoughts.

For the first time, doctors are preparing to test a brain-computer interface (BCI) that can be implanted onto a human brain, no open surgery required.
The Stentrode, a neural implant that can let paralyzed people communicate, can be delivered to a patient’s brain through the jugular vein — and the company that developed it, Synchron, just got approval to begin human experimentation.
Because the skull stays sealed shut, patients could receive their neural implants without running as great a risk of seizures, strokes, or permanent neural impairments, all of which can be caused by open-brain surgery.

One day soon you may be filling your lungs with crisp ocean air, your arms bathed in warm light as the sun sets over softly lapping waters, and you may wonder: is this real? Or are scientists projecting holograms into my brain to create a vivid sensory experience that isn’t actually happening? A group of researchers at the University of California, Berkeley is in the early stages of testing their ability to create, edit and scrub sensory experiences from your brain, both real-time and stored experiences: memories.

Using light to make us see what isn’t there.

Different sensory experiences show up in brain imaging as patterns of neurons firing in sequence. Neuroscientists are trying to reverse-engineer those experiences by stimulating neurons to re-create the same firing patterns.

Is this life real? Philosophers and physicists say we might be living in a computer simulation, but how can we tell? And does it matter?

Our species is not going to last forever. One way or another, humanity will vanish from the Universe, but before it does, it might summon together sufficient computing power to emulate human experience, in all of its rich detail. Some philosophers and physicists have begun to wonder if we’re already there. Maybe we are in a computer simulation, and the reality we experience is just part of the program.

In a new interview, MIT researcher Rizwan Virk told Digital Trends that, in his estimation, we’re probably living in a simulation.
“I would say it’s somewhere between 50 and 100 percent,” he told the site. “I think it’s more likely that we’re in simulation than not.”

Contrary to popular belief, peace and quiet is all about the noise in your head.

One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme—it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

An emerging techno-consumerism is taking aim at what makes us human: love, happiness, politics, the search for meaning and more. It amounts to the beginnings of a new kind of modernity.

The founders of a new, AI-fuelled chatbot want it to become your best friend and most perceptive counsellor. An intelligent robot pet promises to assuage chronic loneliness among the elderly. The creators of an immersive virtual world — meant to be populated by thousands or even millions of users — say it will generate new insight into the nature of justice and democracy.
Three seemingly unrelated snapshots of these dizzying, accelerated times. But look closer and they all point towards the beginnings of a profound shift in our relationship to technology. How we use it and relate to it. What we think, ultimately, it is for.
This shift amounts to the emergence of a new kind of modern experience; a new kind of modernity. Let’s assign this emerging moment a name — augmented modernity.

In Part I of this series, Religion and the Simulation Hypothesis: Is God an AI?, we looked at the implications of the Simulation Hypothesis, the theory that we are all living inside a sophisticated video game, as a model for how many things that are religious in nature might actually be implemented using science and technology. We looked briefly at the groundbreaking film The Matrix, and how it brought this idea forward into popular consciousness with its release 20 years ago. We also looked at some of the central tenets of the Western (or more accurately, Middle Eastern or Abrahamic) religious traditions to show that they were not only consistent with this new theory, but that the theory provided a way to bridge the ever-widening gap between religion and science.
In this second part of the series, we turn to the Eastern religious traditions, Hinduism and Buddhism in particular (and some of their offshoots), and look at some of their central tenets. While we had to search for ways that the simulation hypothesis might be implied in the core beliefs of the Western religions, the simulation hypothesis (or more specifically, its video game version) seems almost tailor-made to fit these traditions.

Do you know you are a multidimensional being? Your body isn’t big enough to contain you.

In recently published research produced by a team from the Blue Brain Project, neuroscientists applied a classic branch of math called algebraic topology in a whole new way to peer into the brain, discovering that it contains tightly bound groups, or cliques, of neurons.

Each neuron group, according to size, forms its own high-dimensional geometric object. “We found a world that we had never imagined,” says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

A breakthrough in studying light might just be the ticket to the future of quantum computing. Researchers at EPFL have found a way to determine how light behaves beyond the limitations of wavelengths, opening the door to encoding quantum data in a sci-fi-style holographic light pattern. The team took advantage of the quantum nature of the interaction between electrons and light to separate beams in terms of energy, not space -- that let them use light pulses to encrypt info on the electron wave and map it with a speedy electron microscope.

Existing techniques for both studying light and extracting 3D info are inherently limited by the size of wavelengths. Because the new approach sidesteps that limit, it allows considerably higher resolution and can even capture holographic movies of fast-moving objects.

Sci-fi movies like Star Wars and Avatar depict holograms that you can see from any angle, but the reality is a lot less scintillating. So far, the only true color hologram we've seen comes from a tiny, complicated display created by a Korean group led by LG, while the rest are just "Pepper's Ghost" style illusions. Now, researchers from Brigham Young University (BYU) have created a true 3D hologram, or "volumetric image," to use the correct term. "We can think about this image like a 3D-printed object," said BYU assistant professor and lead author Daniel Smalley.

When Morpheus told us our reality was fake, it sounded far-fetched. Since then, though, the idea has picked up steam. In 2001, two years after The Matrix hit theaters, Oxford University philosopher Nick Bostrom circulated the first draft of his “simulation argument,” which posits three scenarios: (1) Humanity will go extinct before creating technology powerful enough to run convincing simulations of reality; (2) humanity will live to see such technology but decide, for whatever reason, not to run any simulations; (3) humanity will create that technology and run many different simulations of its evolutionary history — in which case there would be lots of simulated realities and only one non-simulated one, so maybe it’s more likely than not that we’re living in a simulation right now. That third scenario has excited many over the years, including Elon Musk, who in 2016 put our odds of living in a non-simulated reality at “one in billions.” We called Bostrom to discuss his paper’s legacy.
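
The pull of the third scenario comes from a simple counting argument: if ancestor simulations are ever run at scale, simulated observers would vastly outnumber non-simulated ones. A toy sketch of that arithmetic (every number here is an illustrative assumption, not an estimate from Bostrom's paper):

```python
# Toy version of the counting argument behind scenario 3.
# All inputs are made-up illustrative values.
base_realities = 1                    # the one non-simulated world
sims_per_civilization = 1_000         # assumed number of ancestor simulations
observers_per_world = 8_000_000_000   # assumed population per world

simulated = base_realities * sims_per_civilization * observers_per_world
unsimulated = base_realities * observers_per_world

p_sim = simulated / (simulated + unsimulated)
print(f"fraction of observers who are simulated: {p_sim:.4f}")  # 0.9990
```

The point is only that the conclusion is driven by the ratio of simulations to base realities: under almost any assumption where simulations are numerous, nearly all observers are simulated ones.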

In a new paper in Advanced Materials, researchers at the University of Connecticut describe a new sensor embedded in a layer of silicone skin that can help burn victims “feel” again and even confer some superhuman sensory perception.

Humanity could be on the verge of an unprecedented merging of human biology with advanced technology, fusing our thoughts and knowledge directly with the cloud in real-time – and this incredible turning point may be just decades away, scientists say.

In a new research paper exploring what they call the 'human brain/cloud interface', scientists explain the technological underpinnings of what such a future system might be, and also address the barriers we'll need to address before this sci-fi dream becomes reality.

Scientists from the University of Bristol's Intangible Realities Laboratory (IRL) and ETH Zurich have used virtual reality and artificial intelligence algorithms to learn the details of chemical change.

In a cover article published today in The Journal of Physical Chemistry, researchers across the University of Bristol and ETH Zurich describe how advanced interaction and visualisation frameworks using virtual reality (VR) enable humans to train machine-learning algorithms and accelerate scientific discovery.

Cells in the body are wired like computer chips to direct signals that instruct how they function, research suggests. Unlike a fixed circuit board, however, cells can rapidly rewire their communication networks to change their behaviour. The discovery of this cell-wide web turns our understanding of how instructions spread around a cell on its head.

A localization phenomenon boosts the accuracy of solving quantum many-body problems with quantum computers. These problems are otherwise challenging for conventional computers. This brings such digital quantum simulation within reach using quantum devices available today.

For six minutes on January 23, 2017, the coldest known spot in the universe floated 150 miles above Kiruna, Sweden. As far as we know, the coldest anything in nature can be is absolute zero on the Kelvin scale, which is –459.67°F, or –273.15°C. This postage-stamp-sized atom chip, packed tight with thousands of rubidium-87 atoms, was just a few billionths of a degree warmer than that. The atom chip was up there in low orbit to help a team of scientists study up close some of the oddest, least-understood stuff there is: Bose-Einstein condensate (BEC). The team of German scientists was led by Dennis Becker of the QUEST-Leibniz Research School at Leibniz University Hannover in Hanover, Germany.
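
The temperature conversions quoted above (absolute zero = 0 K = –273.15°C = –459.67°F) follow from the standard formulas and can be checked in a couple of lines:

```python
# Check the absolute-zero conversions quoted in the text.
def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

absolute_zero_c = kelvin_to_celsius(0.0)
absolute_zero_f = celsius_to_fahrenheit(absolute_zero_c)
print(round(absolute_zero_c, 2), round(absolute_zero_f, 2))  # -273.15 -459.67
```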

With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence, or intelligence augmentation (IA). It’s an open question which will come first, but a technologically boosted brain could be just as powerful, and just as dangerous, as AI.

As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.

Researchers have been making massive ‘jaw-dropping’ strides in robotics lately. We’re already aware of Sophia, the ‘almost human’ robot created by former Disney Imagineer David Hanson, that can inspire feelings of love among humans. Now, scientists at Cornell University have come out with a new ‘lifelike’ material that can move and eat on its own. What’s even more mind-boggling is that this material can also die and decay, just like living beings.

The human brain may become the next frontier in hacking, cybersecurity researchers have warned in a paper outlining the vulnerabilities of neural implant technologies that can potentially expose and compromise our consciousness.

Since the 1990s, researchers in the social and natural sciences have used computer simulations to try to answer questions about our world: What causes war? Which political systems are the most stable? How will climate change affect global migration? The quality of these simulations is variable, since they are limited by how well modern computers can mimic the vast complexity of our world — which is to say, not very well.

But what if computers one day were to become so powerful, and these simulations so sophisticated, that each simulated “person” in the computer code were as complicated an individual as you or me, to such a degree that these people believed they were actually alive? And what if this has already happened?

They're coming. The technology of the future won't be shiny and chrome with blinking lights; it will look just like us, speak with a human voice, remember our interactions, and reply with a wink and a smile. Meet the virtual beings.

To create artificial humans has been an ambition of ours since ancient times, such as in the myths of Daedalus and Pygmalion, who created statues that came to life. In modern times, our imagination moved on from fashioning people out of clay or bronze. Instead, we imagined high-tech androids, such as Data from Star Trek, or the holographic doctor from Voyager. Perhaps our creations would even surpass us, like the replicants from Blade Runner, who were 'more human than human.'

“If we are living in a simulation, then the cosmos that we are observing is just a tiny piece of the totality of physical existence."

What if everything around us — the people, the stars overhead, the ground beneath our feet, even our bodies and minds — were an elaborate illusion? What if our world were simply a hyper-realistic simulation, with all of us merely characters in some kind of sophisticated video game?

This, of course, is a familiar concept from science fiction books and films, including the 1999 blockbuster movie "The Matrix." But some physicists and philosophers say it’s possible that we really do live in a simulation — even if that means casting aside what we know (or think we know) about the universe and our place in it.

You wake up on a bus, surrounded by all your remaining possessions. A few fellow passengers slump on pale blue seats around you, their heads resting against the windows. You turn and see a father holding his son. Almost everyone is asleep. But one man, with a salt-and-pepper beard and khaki vest, stands near the back of the bus, staring at you. You feel uneasy and glance at the driver, wondering if he would help you if you needed it. When you turn back around, the bearded man has moved toward you and is now just a few feet away. You jolt, fearing for your safety, but then remind yourself there’s nothing to worry about. You take off the Oculus helmet and find yourself back in the real world, in Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford University.

An illusion that mimics near-death experiences seems to reduce people’s fear of dying.

Mel Slater at the University of Barcelona, Spain, and his team have used virtual reality headsets to create the illusion of being separate from your own body. They did this by first making 32 volunteers feel like a virtual body was their own: while a volunteer wore a headset, the virtual body would match any real movements they made. When a virtual ball was dropped onto the foot of the virtual body, a vibration was triggered on the person’s real foot.

According to a new study, people who saw what it would be like to lose their jobs and homes using virtual reality developed longer-lasting compassion toward the homeless compared to those who explored other media versions of the VR scenario, like text.

“Experiences are what define us as humans, so it’s not surprising that an intense experience in VR is more impactful than imagining something,” says Jeremy Bailenson, a professor of communication at Stanford University and coauthor of the paper, which appears in PLOS ONE.