In a perspective article published in the journal Nature Neuroscience, biomedical engineering professor Garrett Stanley detailed research progress toward "reading and writing the neural code." The neural code describes how the brain's roughly 100 billion neurons represent information.

Natural spider silk is already amazingly strong stuff, plus scientists have developed synthetic versions of the material. Now, however, Italian and British researchers have split the difference, in a manner of speaking – they've created silk that comes from spiders, but that has added man-made ingredients which give it extra strength.

Led by Prof. Nicola Pugno from Italy's University of Trento, the scientists fed "special" water to three species of spiders. What made it special? Dispersed within it were microscopic flakes of graphene, or carbon nanotubes (which are made of rolled-up sheets of graphene). Taking the form of a one-atom-thick sheet of linked carbon atoms, graphene is currently the world's strongest material.

When silk was subsequently gathered from the spiders, it was found that the graphene/nanotubes had been incorporated into the fibers. As a result, the silk's tensile strength and toughness were much higher than those of regular spider silk. "We found that the strongest silk the spiders spun had a fracture strength up to 5.4 gigapascals (GPa), and a toughness modulus up to 1,570 joules per gram (J/g)," says Pugno. "Normal spider silk, by comparison, has a fracture strength of around 1.5 GPa and a toughness modulus of around 150 J/g."

August 20, 2017 marks the 40th anniversary of the launch of the first NASA Voyager mission, which is carrying a golden record filled with messages to potential civilizations beyond our solar system. This year is also the 20th anniversary of the sci-fi film Contact, which dealt with receiving radio messages from extraterrestrials. Both the record and the film were brainchildren of the late Carl Sagan, and both raise an interesting question: which approach has the greater chance of making contact with aliens – sending radio messages or unmanned probes?

First contact with extraterrestrial civilizations has long fascinated scientists, philosophers, and writers. It's a topic that has been explored by serious scientific studies, crackpots, tabloids, science fiction epics, and international debates. The speculated results of the first meeting of man and alien run the entire gamut of imagination. Visits by aliens or greetings from the stars have been imagined as everything from wonderfully transcendent, with the human race raised to the next step in evolutionary perfection, to us ending up as the main course on someone's dinner table.

Interesting, speculative essay on what may be waiting for us should we choose to contact alien species. Of course our television and radar signals have been leaking into space for decades, so some intelligent life could be planning to find out more about us. I recommend caution. Be careful what you wish for.

Creating a huge global network connecting billions of individuals might be one of humanity’s greatest achievements to date, but microbes beat us to it by more than three billion years. These tiny single-celled organisms aren’t just responsible for all life on Earth. They also have their own versions of the World Wide Web and the Internet of Things. Here’s how they work.

Much like our own cells, microbes treat pieces of DNA as coded messages. These messages contain information for assembling proteins into molecular machines that can solve specific problems, such as repairing the cell. But microbes don’t just get these messages from their own DNA. They also swallow pieces of DNA from their dead relatives or exchange them with living mates.

These DNA pieces are then incorporated into their genomes, which are like computers overseeing the work of the entire protein machinery. In this way, the tiny microbe is a flexible learning machine that intelligently searches for resources in its environment. If one protein machine doesn't work, the microbe tries another one. Trial and error solves the problem.

But microbes are too small to act on their own. Instead, they form societies. Microbes have been living as giant colonies, containing trillions of members, from the dawn of life. These colonies have even left behind mineral structures known as stromatolites. These are microbial metropolises, frozen in time like Pompeii, that provide evidence of life from billions of years ago.

Body organs such as kidneys, livers and hearts are incredibly complex tissues. Each is made up of many different cell types, plus other components that give the organs their structure and allow them to function as we need them to.

For 3D printed organs to work, they must mimic what happens naturally – both in terms of arrangement and serving a biological need. For example, a kidney must process and excrete waste in the form of urine.

Our latest paper shows a new technique for 3D printing of cells and other biological materials as part of a single production process. It's another step towards being able to print complex, living structures.

But it's not organ transplants we see as the most important possible consequence of this work.

There is already evidence that 3D cell printing is a technology useful in drug development, something that may reduce the burden on animals for testing and bring new treatments to market more quickly and safely.

New research suggests that up to half of the matter in the Milky Way may come from galaxies far, far away. Scientists say this could mean that each of us is made, in part, from extragalactic matter.

Using supercomputer simulations, researchers found a major and unexpected new mode for how galaxies, including our own Milky Way, acquired their matter: intergalactic transfer.

The simulations show that supernova explosions eject copious amounts of gas from galaxies, which causes atoms to be transported from one galaxy to another via powerful galactic winds. Intergalactic transfer is a newly identified phenomenon, which simulations indicate will be critical for understanding how galaxies evolve.

“Given how much of the matter out of which we formed may have come from other galaxies, we could consider ourselves space travelers or extragalactic immigrants,” says Daniel Anglés-Alcázar, a postdoctoral fellow at the CIERA (Center for Interdisciplinary Exploration and Research in Astrophysics) at Northwestern University.

“It is likely that much of the Milky Way’s matter was in other galaxies before it was kicked out by a powerful wind, traveled across intergalactic space and eventually found its new home in the Milky Way,” he says.

Galaxies are far apart from each other, so even though galactic winds propagate at several hundred kilometers per second, the process occurred over several billion years.

“This study transforms our understanding of how galaxies formed from the Big Bang,” says Claude-André Faucher-Giguère, an assistant professor of physics and astronomy and coauthor of the study that appears in the Monthly Notices of the Royal Astronomical Society.

“What this new mode implies is that up to one-half of the atoms around us—including in the solar system, on Earth, and in each one of us—comes not from our own galaxy but from other galaxies, up to one million light years away.”

Faucher-Giguère and colleagues developed numerical simulations that produced realistic 3D models of galaxies, following formation from just after the Big Bang to the present day. Anglés-Alcázar then developed algorithms to mine the data and quantify how galaxies acquire matter from the universe.

Last year, extraterrestrial exploration venture Breakthrough Initiatives announced an ambitious plan to send tons of tiny spacecraft to our nearest neighboring star system, Alpha Centauri. The project, called Breakthrough Starshot, is focused on launching lightweight ‘nanocraft’ to the stars at rip-roaring speeds. Recently, the project took a big leap toward achieving its ultimate goal by successfully sending six test craft into Low Earth Orbit.

The tiny spacecraft, called “Sprites,” are just 3.5 centimeters on each side and weigh about four grams. Aerospace engineer Zac Manchester, who is leading the design on the Sprites, has been working on them for the last 10 years.

“What we’ve set out to do from the beginning is push the size limits of spacecraft,” Manchester told Gizmodo. “The question was how small can we make a satellite and still make it do something useful. One of the challenges is how can you get enough power, and given the tiny power you can harvest, how do you communicate back to Earth?”

Zac Manchester, the aerospace engineer who has spent about 10 years as design leader on the Sprites (nanocraft), said that the team wants to build the smallest possible spacecraft that can still do something useful.

Think about your favorite work of art. Why do you like it so much? What does it do for you?

Be it painting, sculpture, music, or writing, we love art not just for its beauty, but for the reactions and emotions it evokes in us. You probably feel a sort of kinship with your favorite artists even though you’ve never met them, because their work speaks to you in what feels like a unique and personal way.

How does this change when the art in question is produced by a machine and not a human? Is creativity an irreplaceable human skill, or will computers be able to learn it?

In a new video from Big Think, Andrew McAfee, associate director of MIT Sloan School of Management’s Center for Digital Business, discusses these questions and explores the concept of creative AI.

But besides wondering whether AI will ever be able to understand the human condition and reflect it back to us in a meaningful way, shouldn’t we also be wondering why—or, better yet, whether—we want it to be able to?

Reminds me of the old Memorex audio tape commercials--"Is it real or is it Memorex?" Can Artificial Intelligence be as creative as the human spirit? An excellent question explored in this article. One way or another, singularity is coming. Your next museum or art director could be a sophisticated robot.

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.

In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.

A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.

Novelists often offer deep insights into the human psyche that take psychologists years to test. In his 1864 Notes from Underground, for example, Russian novelist Fyodor Dostoyevsky observed: “Every man has reminiscences which he would not tell to everyone, but only to his friends. He has other matters in his mind which he would not reveal even to his friends, but only to himself, and that in secret. But there are other things which a man is afraid to tell even to himself, and every decent man has a number of such things stored away in his mind.”

Intuitively, the observation rings true, but is it true experimentally? Twenty years ago social psychologists Anthony Greenwald, Mahzarin Banaji and Brian Nosek developed an instrument called the Implicit Association Test (IAT) that, they claimed, can read the innermost thoughts that you are afraid to tell even yourself. And those thoughts appear to be dark and prejudiced: we favor white over black, young over old, thin over fat, straight over gay, able over disabled, and more.

I took the test myself, as can you (Google “Project Implicit”). The race task first asks you to separate black and white faces into one of two categories: White people and Black people. Simple. Next you are asked to sort a list of words (joy, terrible, love, agony, peace, horrible, wonderful, nasty, and so on) into either Good or Bad buckets. Easy. Then the words and the black and white faces appear on the screen one at a time for you to sort into either Black people/Good or White people/Bad. The word “joy,” for example, would go into the first category, whereas a white face would go into the second category. This sorting becomes noticeably slower. Finally, you are tasked with sorting the words and faces into the categories White people/Good or Black people/Bad. Distressingly, I was much quicker to associate words like joy, love and pleasure with White people/Good than I was with Black people/Good.

The test's assessment of me was not heartening: “Your data suggest a strong automatic preference for White people over Black people. Your result is described as 'automatic preference for Black people over White people' if you were faster responding when Black people and Good are assigned to the same response key than when White people and Good were classified with the same key. Your score is described as an 'automatic preference for White people over Black people' if the opposite occurred.”
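The verdict above comes from the gap in reaction times between the two pairing conditions. A minimal sketch of that idea, loosely based on the published D-score approach (the real scoring algorithm adds error penalties and trial filtering), using made-up latencies:

```python
# Simplified sketch of how an IAT-style score can be computed from reaction
# times in milliseconds. A slower average in one pairing condition yields a
# positive score. Latencies below are hypothetical, for illustration only.
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Difference in mean latency, scaled by the pooled standard deviation."""
    pooled = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled

white_good = [620, 650, 600, 640, 610]   # faster "compatible" block
black_good = [780, 820, 750, 800, 770]   # slower "incompatible" block
score = iat_d_score(white_good, black_good)
print(round(score, 2))  # positive => automatic preference for the faster pairing
```

A score near zero would indicate no timing difference between the two pairings; the further from zero, the stronger the measured automatic association.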

A disturbing test that reveals the innermost secrets of our psyche. Perhaps, we are all racist deep inside our mind. You can take this test and find out for yourself. An adventure into our darker side.

Recall your favorite memory: the big game you won; the moment you first saw your child's face; the day you realized you had fallen in love. It's not a single memory, though, is it? Reconstructing it, you remember the smells, the colors, the funny thing some other person said, and the way it all made you feel.

Your brain's ability to collect, connect, and create mosaics from these milliseconds-long impressions is the basis of every memory. By extension, it is the basis of you. This isn't just metaphysical poetics. Every sensory experience triggers changes in the molecules of your neurons, reshaping the way they connect to one another. That means your brain is literally made of memories, and memories constantly remake your brain. This framework for memory dates back decades. And a sprawling new review published today in Neuron adds an even finer point: Memory exists because your brain’s molecules, cells, and synapses can tell time.

Defining memory is about as difficult as defining time. In general terms, memory is a change to a system that alters the way that system works in the future. "A typical memory is really just a reactivation of connections between different parts of your brain that were active at some previous time," says neuroscientist Nikolay Kukushkin, coauthor of this paper. And all animals—along with many single-celled organisms—possess some sort of ability to learn from the past.

An image and short film has been encoded in DNA, using the units of inheritance as a medium for storing information.

Using a genome editing tool known as Crispr, US scientists inserted a gif - five frames of a horse galloping - into the DNA of bacteria.

Then the team sequenced the bacterial DNA to retrieve the gif and the image, verifying that the microbes had indeed incorporated the data as intended.

The results appear in Nature journal.

For their experiments, the team from Harvard University in Cambridge, Massachusetts, used an image of a human hand and five frames of the horse Annie G captured in the late 19th Century by the British photography pioneer Eadweard Muybridge.

In order to insert this information into the genomes of bacteria, the researchers transferred the image and the movie onto nucleotides (building blocks of DNA), producing a code that related to the individual pixels of each image.

The researchers then employed the Crispr platform, in which two proteins are used to insert genetic code into the DNA of target cells - in this case, those of E.coli bacteria.

For the gif, sequences were delivered frame-by-frame over five days to the bacterial cells.

The data were spread across the genomes of multiple bacteria, rather than just one, explained co-author Seth Shipman, from Harvard University in Massachusetts.

Animals and plants are seemingly disappearing faster than at any time since the dinosaurs died out, 66m years ago. The death knell tolls for life on Earth. Rhinos will soon be gone unless we defend them, Mexico’s final few Vaquita porpoises are drowning in fishing nets, and in America, Franklin trees survive only in parks and gardens.

Yet the survivors are taking advantage of new opportunities created by humans. Many are spreading into new parts of the world, adapting to new conditions, and even evolving into new species. In some respects, diversity is actually increasing in the human epoch, the Anthropocene. It is these biological gains that I contemplate in a new book, Inheritors of the Earth: How Nature is Thriving in an Age of Extinction, in which I argue that it is no longer credible for us to take a loss-only view of the world’s biodiversity.

The beneficiaries surround us all. Glancing out of my study window, I see poppies and camomile plants sprouting in the margins of the adjacent barley field. These plants are southern European “weeds” taking advantage of a new human-created habitat. When I visit London, I see pigeons nesting on human-built cliffs (their ancestors nested on sea cliffs) and I listen out for the cries of skyscraper-dwelling peregrine falcons which hunt them.

Climate change has brought tree bumblebees from continental Europe to my Yorkshire garden in recent years. They are joined by an influx of world travellers, moved by humans as ornamental garden plants, pets, crops, and livestock, or simply by accident, before they escaped into the wild. Neither the hares nor the rabbits in my field are “native” to Britain.

A fascinating look at how new species are being created thanks to the workings of human societies. New human-created habitats are providing refuge for a large variety of animal and plant life. The current extinction wave may end up creating more life than it destroys.

The rulings on online speech are coming down all over the world. Most recently, on June 30, Germany passed a law that orders social media companies operating in the country to delete hate speech within 24 hours of it being posted, or face fines of up to $57 million per instance. That came two days after a ruling by the Supreme Court of Canada that Google must scrub search results about pirated products. And in May a court in Austria ruled that Facebook must take down specific posts that were considered hateful toward the country’s Green party leader. Each of those rulings mandated that companies remove the content not just in the countries where it was posted, but globally. Currently, in France, the country’s privacy regulator is fighting Google in the courts to get the tech giant to apply Europe’s “right to be forgotten” laws worldwide. And, around the world, dozens of similar cases are pending.

The trend of courts applying country-specific social media laws worldwide could radically change what is allowed to be on the internet, setting a troubling precedent. What happens to the global internet when countries with different cultures have sharply diverging definitions of what is acceptable online speech? What happens when one country's idea of acceptable speech clashes with another's idea of hate speech? Experts worry the biggest risk is that the whole internet will be forced to comport with the strictest legal limitations.

With more countries putting more restrictions on what can be discussed on the internet, the concepts of net neutrality and freedom of speech are in deep trouble. Country-specific laws will radically change the internet.

You’ve probably met people who are experts at mastering their emotions and understanding the emotions of others. When all hell breaks loose, somehow these individuals remain calm. They know what to say and do when their boss is moody or their lover is upset. It’s no wonder that emotional intelligence was heralded as the next big thing in business success, potentially more important than IQ, when Daniel Goleman’s bestselling book, Emotional Intelligence, arrived in 1995. After all, whom would you rather work with—someone who can identify and respond to your feelings, or someone who has no clue? Whom would you rather date?

The traditional foundation of emotional intelligence rests on two common-sense assumptions. The first is that it’s possible to detect the emotions of other people accurately. That is, the human face and body are said to broadcast happiness, sadness, anger, fear, and other emotions, and if you observe closely enough, you can read these emotions like words on a page. The second assumption is that emotions are automatically triggered by events in the world, and you can learn to control them through rationality. This idea is one of the most cherished beliefs in Western civilization. For example, in many legal systems, there’s a distinction between a crime of passion, where your emotions allegedly hijacked your good sense, and a premeditated crime that involved rational planning. In economics, nearly every popular model of investor behavior separates emotion and cognition.

These two core assumptions are strongly appealing and match our daily experiences. Nevertheless, neither one stands up to scientific scrutiny in the age of neuroscience. Copious research, from my lab and others, shows that faces and bodies alone do not communicate any specific emotion in any consistent manner. In addition, we now know that the brain doesn’t have separate processes for emotion and cognition, and therefore one cannot control the other. If these statements defy your common sense, I’m right there with you. But our experiences of emotion, no matter how compelling, don’t reflect the biology of what’s happening inside us. Our traditional understanding and practice of emotional intelligence badly needs a tuneup.

A Google engineer has been fired after writing a memo asserting that biological differences between men and women are responsible for the tech industry's gender gap.

"We need to stop assuming that gender gaps imply sexism," James Damore wrote in the manifesto, which was first reported by Vice's Motherboard and later released in full by Gizmodo.

The 10-page document criticises Google initiatives aimed at increasing gender and racial diversity, and argues that Google should focus more on "ideological diversity" to make conservatives more comfortable in the company's work environment.

In response, Google CEO Sundar Pichai cut his vacation short and wrote a memo criticising Damore's manifesto for advancing harmful gender stereotypes. "To suggest a group of our colleagues have traits that make them less biologically suited to that work is offensive and not OK," Pichai wrote.

Experts have been quick to cite numerous scientific meta-analyses of differences between the sexes, most of which suggest that men and women are alike in terms of personality and cognitive ability.

Here are the specific claims Damore made in his manifesto, and the real science behind them.

Sport and the arts are vital components of the UK’s national culture, but are often treated as though they are separate worlds, despite both being the responsibility of the Department for Digital, Culture, Media and Sport. It is striking how few mentions sport gets in arts strategies and vice versa. The Culture White Paper published just last year has no place for sport.

Historically, sports and arts were not always so separated. Ancient Greek culture, for example, was quite comfortable celebrating the physical and the aesthetic together. But in today’s pigeonholes, the arts are typically characterised by the aesthetic and sport by competition. Yet the aesthetic of gymnastics, ice skating or diving is clear. Equally, events like the Turner prize demonstrate that the arts are not averse to a bit of competition. And part of de Coubertin’s vision for the modern Olympic Games was to glorify beauty through involvement of the arts and the mind.

If you're a fan of potato chips, the next best thing might just be a crispy, lightweight sheet of preserved jellyfish.

Scientists have come up with a new way to prepare these animals for consumption, improving on a centuries-old technique. And they say that eating these creatures would both help us battle jellyfish blooms in certain parts of the world, and diversify our food chain.

In China, jellyfish from the Rhizostomae order have been consumed for more than 1,700 years, and you can find them in salads and soups in many Southeast Asian countries - but the practice has never really caught on in the west.

Now researchers in Denmark have come up with a new way to prepare these animals for consumption, and hope that the dried-out final product might entice appetites way beyond Asia.

Considering that the world's growing population is in urgent need of diversifying our food sources, we suppose it's worth hearing these researchers out.

Typically, a jellyfish destined for your plate is caught fresh and immediately - while still alive - steeped in a specialised mixture of table salt and alum, a potassium-aluminium compound commonly used in leather tanning and baking powder.

Over the course of a month, the steeping process goes through multiple steps as the treatment reduces the water content of the jellyfish, preserving it and rendering it into a somewhat rubbery, chewy product.

One of the hassles involved with using sunscreen is the fact that you shouldn't just apply it once – depending on who you ask, it should be reapplied at least once every few hours. That isn't the case, however, with an experimental new coating made from DNA. It actually gets more effective the longer it's left on the skin.

Led by assistant professor of biomedical engineering Guy German, a team at New York's Binghamton University developed thin and optically transparent crystalline DNA films, then irradiated them with ultraviolet light. It was found that the more UV exposure the films received, the more their optical density increased, and the better they got at absorbing the rays.
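The link between optical density and absorption is the standard Beer-Lambert relation: a film with optical density OD transmits a fraction 10^(-OD) of the incident light. The values below are illustrative only, not measurements from the study:

```python
# Beer-Lambert relation: transmitted fraction falls off exponentially with
# optical density, so even modest increases in OD block much more UV.
def transmitted_fraction(od):
    """Fraction of incident light that passes through a film of optical density od."""
    return 10 ** -od

for od in (0.5, 1.0, 2.0):
    pct = round(transmitted_fraction(od) * 100, 1)
    print(f"OD {od}: {pct}% of UV transmitted")  # 31.6%, 10.0%, 1.0%
```

This is why a rising optical density in the DNA films translates directly into better UV protection for the skin beneath.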

"Ultraviolet light can actually damage DNA, and that's not good for the skin," states German. "We thought, let's flip it. What happens instead if we actually used DNA as a sacrificial layer? So instead of damaging DNA within the skin, we damage a layer on top of the skin."

If we ever needed a timely reminder that in the world of academic publishing not all scientific journals are created equal, we now have it.

To test just how low the quality bar is for exploitative predatory journals, a prominent neuroscientist has tricked four publications into accepting a totally fake paper about midi-chlorians – the entirely fictional life forms in Star Wars that make 'the force' possible.

Neuroskeptic, a working neuroscientist who anonymously blogs about science for Discover, set up the sting, submitting the nonsensical study to nine scientific journals – only to have four of them accept it.

The journals approached are among those sometimes described as predatory in science circles because they exploit researchers by charging fees to have their papers published.

But in this case, three of the publications just went ahead and published the fake paper straight up – clearly not having read or checked it first – even without requiring payment of a fee.

Another, the American Journal of Medical and Biological Research, also accepted the paper, but demanded a $360 fee before publishing it.

The absurd thing, as Neuroskeptic explains, is the average human being would only need about five minutes (or less) with the paper to see that it's entirely bogus and riddled with inexplicable Star Wars references.

For a start, it's written by none other than the decidedly fishy-looking Dr Lucas McGeorge and Dr Annette Kin, and while at a very quick scan it might pass for a chemistry discussion, that's only because Neuroskeptic scraped the content of the Wikipedia page on mitochondrion (real) and reworded it, changing references to midi-chlorian/midichlorian (not so real).

To further make things obvious – just in case any 'peer-reviewers' working for the publications were actually paying attention – Neuroskeptic dropped in entire passages ripped off wholesale from Star Wars, inserting them not-so-subtly into the text.

"Midichlorians-mediated oxidative stress causes cardio-myopathy in Type 2 diabetics. As more fatty acids are delivered to the heart, and into cardiomyocytes, the oxidation of fatty acids in these cells increases," the paper reads, sounding kind of legit and science-y, but then suddenly:

"Did you ever hear the tragedy of Darth Plagueis the Wise? I thought not. It is not a story the Jedi would tell you. It was a Sith legend. Darth Plagueis was a Dark Lord of the sith, so powerful and so wise he could use the Force to influence the midichloria to create life."

Apparently, it's easy to fool just about anybody these days, even four scientific journals. Don't believe everything you read, see, or hear. Perhaps AI can help us differentiate between fake and real news.

One thousand years. That is the minimum length of time it would take us to get to the nearest star - Proxima Centauri - using current methods.

But since we discovered that this star houses a potentially habitable planet, scientists have been more enthusiastic about the idea of interstellar travel than ever before.

"It's tantalising," Guillem Anglada-Escude, who led the research team that discovered the planet, said in an interview with NPR.

"Now that we know the planet is there, we can be more creative. We can think about solutions - maybe to send interstellar probes or to design specific spacecraft to look for this planet."

Still, the 4.2 light-years that stretch between us and Proxima Centauri represent a daunting distance for space explorers. It may take us a while to come up with those solutions. So we asked Futurism readers when they thought the first human will leave our solar system.

Not very soon, it seems. The option that received the most votes by far was 2100 or later - this was the choice of about 35 percent of respondents.

As respondent Charles Hornbostel explained, "With human exploration of Mars expected no earlier than the 2025-30 time frame, it is reasonable to expect humans will not have reached the orbits of Neptune and Pluto by century's end, barring any breakthroughs in exotic propulsion technology."

Hornbostel is right about the many plans countries and companies alike are pursuing to put humans on Mars in the next 10 to 15 years.

In the Star Trek universe, Vulcans would sometimes bust out one of their most impressive abilities: the mind meld. In this maneuver, the Vulcan would form a mental bond with someone else, and the two would sync up to the point that they basically shared one consciousness. Researchers at the Basque Centre on Cognition, Brain, and Language (BCBL) in Spain have now shown that humans do something a bit similar – just by having a conversation.

While the team there didn't quite uncover our latent psychic abilities, they did discover that when two people hold a conversation, their brain waves synchronize.

To carry out its research, the team placed pairs of people on either side of an opaque partition and had them hold a scripted conversation. The people in the study were strangers to each other and they were all same-sex pairs. They also took turns as both the listener and the speaker.

All the participants were connected to electroencephalography (EEG) machines, which monitored the electrical activity of their brains through electrodes placed on their scalps. Sure enough, once the conversation began, the researchers could see that each pair's brainwaves fell into sync. The effect was so pronounced, in fact, that the researchers say they can now tell whether two people are communicating simply by looking at their EEG results.
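The kind of comparison described here can be illustrated with a toy sketch. The study's actual analysis pipeline isn't specified in this article, so the signals, noise levels, and the use of simple Pearson correlation below are all illustrative assumptions: the point is just that two signals sharing a common rhythm score higher on a synchrony measure than two independent ones.

```python
import numpy as np

def synchrony(sig_a, sig_b):
    """Pearson correlation between two EEG-like signals: a crude
    stand-in for the alignment measures used in such studies."""
    return float(np.corrcoef(sig_a, sig_b)[0, 1])

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 500)              # 2 seconds at 250 Hz
common = np.sin(2 * np.pi * 10 * t)     # shared 10 Hz "alpha" rhythm

# "Conversing" pair: both signals carry the shared rhythm plus noise.
talk_a = common + 0.5 * rng.standard_normal(t.size)
talk_b = common + 0.5 * rng.standard_normal(t.size)

# "Silent" pair: independent noise only.
silent_a = rng.standard_normal(t.size)
silent_b = rng.standard_normal(t.size)

print(synchrony(talk_a, talk_b) > synchrony(silent_a, silent_b))  # True
```

Real EEG work uses far more careful measures (band-limited phase locking, surrogate statistics), but the comparison logic is the same.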

"To be able to know if two people are talking between themselves, and even what they are talking about, based solely on their brain activity is something truly marvelous," said team member Jon Andoni Duñabeitia. "Now we can explore new applications, which are highly useful in special communicative contexts, such as in the case of people who have difficulties with communication."

Isaac Asimov's Three Laws of Robotics are versatile and simple enough that they still persist 75 years after he first coined them. But our current world, where robots and AI agents are cleaning our houses, driving our cars and working alongside us, is vastly different than even the most forward-thinking sci-fi writers could imagine. To make sure the guidelines for programming artificial intelligence cast as wide a net as possible, experts from the University of Hertfordshire have detailed a new system they call "Empowerment."

Originally created as a safety feature of the robots in Asimov's speculative stories, the Three Laws are elegant in their simplicity. 1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws.

But the Hertfordshire researchers believe that these laws don't quite cover all the nuances that could arise in a robot's day-to-day life. Any guidelines for robot behavior need to be simultaneously generic enough to apply to any situation, yet well defined enough to ensure the robot always acts in the best interests of both itself and the humans around it.

Unless you're hard of hearing, or have hearing-impaired friends or relatives, you probably don't understand sign language, which is frustrating for those who rely on it to communicate. Now engineers at the University of California San Diego have developed a prototype of what they call "The Language of Glove," a Bluetooth-enabled, sensor-packed glove that reads sign language hand gestures and translates them into text.

This isn't the first device designed to break down this particular language barrier. The 2012 Microsoft Imagine Cup was won by the EnableTalk gloves, which translate gestures into speech, and a London team developed a similar system a few years later called the SignLanguageGlove. Uni, meanwhile, is a tablet-like solution that uses infrared motion tracking to convert gestures to speech and text.

The Language of Glove uses a similar tracking method to the other glove-based systems. Nine stretchable sensors are attached to the knuckles of a leather athletic glove, two on each finger and one on the thumb. These are connected to a circuit board on the wrist, which generates a letter of the American Sign Language alphabet based on the position of the fingers.
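The decoding step described above amounts to a lookup: nine bent-or-straight readings form a code that maps to a letter. The sketch below is a loose illustration of that idea only; the thresholds, the binary encoding, and the two example letter codes are invented assumptions, not the UC San Diego team's actual scheme.

```python
# Each stretch sensor is read as bent (1) or straight (0); nine sensors
# together yield a binary code that indexes into an ASL alphabet table.
# The codes below are illustrative placeholders, not real ASL encodings.
ASL_CODES = {
    (1, 1, 1, 1, 1, 1, 1, 1, 0): "A",   # four fingers curled, thumb out
    (0, 0, 0, 0, 0, 0, 0, 0, 1): "B",   # fingers straight, thumb folded
}

def decode(readings, threshold=0.5):
    """Threshold nine analog sensor values and look up the letter."""
    code = tuple(int(r > threshold) for r in readings)
    return ASL_CODES.get(code, "?")     # "?" for unrecognized gestures

print(decode([0.9] * 8 + [0.1]))   # "A"
print(decode([0.1] * 8 + [0.9]))   # "B"
```

In the real device this logic runs on the wrist-mounted circuit board, which then sends the decoded text over Bluetooth.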

Imagine a world where everyone can perfectly understand each other. Language is translated as we speak, and awkward moments of trying to be understood are a thing of the past.

This elusive idea is something that developers have been chasing for years. Free tools like Google Translate – which is used to translate over 100 billion words a day – along with other apps and hardware that claim to translate foreign languages as they are spoken are now available, but something is still missing.

Yes, you can now buy earpiece technology reminiscent of the Hitchhiker’s Guide to the Galaxy babel fish – a bit of kit which claims to do a similar job to that of a university-trained, professionally experienced, multilingual translator – but it’s really not that simple.

Despite the rather interesting claim in 1958 that translation is a Roman invention, it’s likely that it has been around as long as the written word, and interpretation even longer. We have evidence of interpreters being employed by ancient civilisations. Greece and Rome were, like many areas of the ancient world, multilingual, and so needed both translators and interpreters.

The question of how one should translate is just as old. The Roman orator Cicero advised that a translation ought to be “non verbum de verbo, sed sensum exprimere de sensu” – expressing not word for word, but sense for sense.

This brief trip into the world of theory has one simple purpose: to emphasise that translation is not just about the words, and automating the process of replacing one with another could never be a substitute for human translation. Translation is about the words’ meaning, their connotative as well as their denotative sense, and how to express that meaning in such a way that it is both readable and comprehensible.
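Cicero's distinction can be made concrete with a toy word-for-word "translator." The tiny French-English dictionary below is an illustrative assumption, chosen because the idiom it contains breaks under word substitution.

```python
# A word-for-word dictionary: each French word mapped independently.
WORD_FOR_WORD = {
    "il": "it", "pleut": "rains", "des": "some", "cordes": "ropes",
}

def verbum_de_verbo(sentence):
    """Replace each word independently -- words only, no sense."""
    return " ".join(WORD_FOR_WORD.get(w, w) for w in sentence.split())

# "Il pleut des cordes" means, sense for sense, "it's raining cats and
# dogs" -- but word for word it comes out as nonsense:
print(verbum_de_verbo("il pleut des cordes"))  # "it rains some ropes"
```

Modern machine translation has moved far beyond dictionary lookup, but connotation and context remain exactly where such systems still stumble.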

This article reminds me of the universal translators used throughout the long-running "Star Trek" TV and film series. Gene Roddenberry was way ahead of his time. One of these days, we'll just plug a cable into a surgically implanted brain port and use artificial intelligence/machine learning to instantly translate our words for others.

Observe the behavior of shoppers in a long supermarket line or drivers snarled in traffic, and you can quickly become disillusioned about humanity and its collective IQ. Reality TV and websites like People of Walmart only reinforce the impression. Lots of songs, both popular and underground, even utter the phrase “only stupid people are breeding.” Apparently, many of us can relate.

And yet, we’re better at technology today than in times past. Never before have we been more productive, better educated, or more technologically savvy. I had a teacher in high school who said that at the time Einstein was considering relativity, few people in the entire world were intelligent enough to understand it. But just a generation later, everyone had the theory in high school and understood it well, or at least well enough to pass the test.

So at different times and in different ways, we get competing impressions as to whether humanity is collectively getting smarter or less intelligent. Of course, the problem with personal experience is that it’s myopic. So what do the studies tell us? What’s really going on here? Well, as is often the case, things get thornier from here.
