I Sleep on Top of Myself and I Want to Know What Infinity Is, two works by Shen Shaomin, reveal the artist’s outlook through hyperrealistic sculptures that illustrate the potential and metaphorical consequences of an overdeveloped and despoiled world.

I Sleep on Top of Myself is a series of hairless, motorized, lifelike sculptures of animals which portend a future where the earth is so severely depleted of natural resources that animals begin to lose their fur. Trapped in deep sleep, these breathing creatures (a cat, a dog, a bunny, and two geese) are forced to sleep on the remnants of their fur and feathers in order to survive. As a metaphor, this quietly troubling piece asks whether we humans will be forced to survive on the remnants of our past once we have exhausted all of our natural resources.

Through a similar methodology, I Want to Know What Infinity Is questions the urge to relentlessly develop and expand our economies at all costs. Just as we strive for development, the hyperrealistic sculpture of an old, naked, breathing woman relentlessly strives for a perfect tan. In Shen Shaomin’s eyes, the consequences of our constant need for development will change the face of our planet, just as the undying urge for beauty transformed this old woman’s body.

The following article describes how inducing particular emotional states can alter our perception of time, a valid and potentially interesting area of study for artists as well as scientists:

“Everybody knows that the passage of time is not constant. Moments of terror or elation can stretch a clock tick to what seems like a lifetime. Yet, we do not know how the brain “constructs” the experience of subjective time. Would it not be important to know, so we can find ways to make moments last, or pass by, more quickly?

A recent study by van Wassenhove and colleagues is beginning to shed some light on this problem. This group used a simple experimental setup to measure the “subjective” experience of time. They found that people accurately judge whether a dot appears on the screen for a shorter, longer, or the same amount of time as another dot. However, when the dot increases in size so as to appear to be moving toward the individual — i.e., when the dot is “looming” — something strange happens. People overestimate the time that the dot lasted on the screen. This overestimation does not happen when the dot seems to move away; thus, the overestimation is not simply a function of motion. Van Wassenhove and colleagues conducted this experiment during functional magnetic resonance imaging, which enabled them to examine how the brain reacted differently to looming and receding stimuli.

The brain imaging data revealed two main findings. First, structures in the middle of the brain were more active during the looming condition. These brain areas are also known to activate in experiments that involve the comparison of self-judgments to the judgments of others, or when an experimenter does not tell the subject what to do. In both cases, the prevailing idea is that the brain is busy wondering about itself, its ongoing plans and activities, and relating oneself to the rest of the world.

Second, brain areas including the left anterior insula were more active during the receding condition relative to the looming condition. The insula as a whole has been the focus of many recent studies and is thought to be involved in complex emotional processing. In particular, Craig has suggested that there is an emotional asymmetry, in which the left forebrain is associated with approach, safety, positive affect and the right forebrain is associated with arousal, danger, and negative affect. An object moving away might be seen as non-threatening, signaling the self to relax.

In fact, some investigators have suggested that the amount of energy spent during thinking and experiencing defines the subjective experience of duration. In other words, the more energy it takes to process a stimulus the longer it appears as a subjective experience of time. Something moving toward you has more relevance than the same stimulus moving away from you: You may need to prepare somehow; time seems to move more slowly.

The experience of time is not linear. Fear and joy stretch time, as do stimuli that move toward us. What can we learn from these studies for our day-to-day experiences? When we experience something as “taking a long time,” it is really the result of three intertwined processes: the actual duration of the event, how we feel about the event, and whether we think the event is approaching us. There is little we can do about the first factor, but there are obvious ways of modulating how we feel about an event and how we think about an event approaching us. Future studies will need to address the question of whether modifying these factors can alter our subjective time experience, so that we can shorten life’s painfully extended moments of boredom and extend those wonderful moments of bliss.”
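The “energy cost” idea quoted above (the more effort a stimulus demands, the longer it seems to last) can be caricatured in a few lines of Python. The function, the gain parameter, and the cost scores below are invented for illustration and do not come from the study:

```python
# Toy illustration (not from the van Wassenhove study) of the hypothesis
# that subjective duration scales with the processing effort a stimulus demands.

def subjective_duration(actual_ms, processing_cost, k=0.25):
    """Return a toy estimate of perceived duration.

    actual_ms       -- real on-screen duration in milliseconds
    processing_cost -- hypothetical effort score (0 = trivial, 1 = demanding)
    k               -- made-up gain relating effort to time dilation
    """
    return actual_ms * (1.0 + k * processing_cost)

# A looming dot is behaviorally relevant, so we assign it a higher cost
# than a receding one. Both cost values are invented for illustration.
looming = subjective_duration(500, processing_cost=1.0)   # 625.0 ms
receding = subjective_duration(500, processing_cost=0.2)  # 525.0 ms

print(f"looming feels like {looming:.0f} ms, receding like {receding:.0f} ms")
```

On this toy model, a looming stimulus of the same physical duration is judged about 100 ms longer than a receding one, mirroring the direction (though not the measured magnitude) of the reported effect.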

Have you ever seen a wad of chewing gum on the sidewalk and wondered about the person who spat it out? Artist Heather Dewey-Hagborg has done more than wonder. She collects errant hairs, cigarette butts, fingernails, and discarded chewing gum from public places and, using the DNA she finds, creates 3D portraits of how the owners of this discarded genetic material might look.

Dewey-Hagborg’s Stranger Visions project is fascinating in both its concept and its limitations. She explains on her blog that she originally conceived of the project while sitting in her therapist’s office, where she noticed a hair lodged in a crack in a picture frame. She wondered about the person to whom the hair had belonged, imagining what they might look like. Tapping her inner forensic scientist, she extracts DNA from these found items and references a database of DNA regions known to code for certain traits (she told the Smithsonian that she uses 40 to 50 different traits, including things like the space between the eyes and the propensity to be overweight). From those traits she creates a portrait using the Basel Morphable Model, then prints it with a 3D printer, producing an imagined version of the stranger who left their DNA behind. You can see three of those portraits above, and more on her website.
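The pipeline described above (recover DNA, read trait-associated regions, use the predicted traits to parameterize a face model) can be sketched schematically. The SNP names, genotypes, and trait mappings below are entirely hypothetical placeholders, not Dewey-Hagborg’s actual data or any real genetic association:

```python
# Hypothetical sketch of a DNA-to-portrait pipeline; the SNP IDs and
# trait mappings below are invented for illustration only.

TRAIT_TABLE = {
    # (hypothetical SNP, observed genotype) -> predicted trait
    ("snp_eye_color", "GG"): ("eye_color", "brown"),
    ("snp_eye_color", "AA"): ("eye_color", "blue"),
    ("snp_eye_spacing", "CT"): ("eye_spacing", "wide"),
    ("snp_bmi_risk", "TT"): ("weight_tendency", "higher"),
}

def predict_traits(genotypes):
    """Map observed genotypes to a dict of predicted facial traits,
    skipping any SNP/genotype pair the table does not cover."""
    traits = {}
    for snp, genotype in genotypes.items():
        key = (snp, genotype)
        if key in TRAIT_TABLE:
            name, value = TRAIT_TABLE[key]
            traits[name] = value
    return traits

sample = {"snp_eye_color": "AA", "snp_eye_spacing": "CT"}
print(predict_traits(sample))

# In the real project, a trait dict like this would parameterize a 3D face
# model (the Basel Morphable Model) before being sent to a 3D printer.
```

The sketch also makes the project’s limitations concrete: any genotype absent from the lookup table simply yields no prediction, just as traits shaped by epigenetics and environment fall outside what the DNA alone can say.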

Of course, DNA can tell us only so much. Since Dewey-Hagborg doesn’t know the people behind the DNA, she can’t compare the portraits to the real human beings, although she has created her own genetic self-portrait. Even if we had a complete picture of how DNA links to facial features, these portraits wouldn’t account for the role of epigenetics and environment in how our features develop. Additionally, Dewey-Hagborg has found certain limitations in the Basel Morphable Model itself: most of the people used to train the system were of European descent, which has led to problems in creating portraits of people who are not of European descent.

Still, her endeavor may have utility beyond being an interesting art project. The Smithsonian has a fascinating profile of Dewey-Hagborg that reveals more about her process. It ends on this note: Dewey-Hagborg was recently contacted by a medical examiner on a cold case, hoping she could create a portrait of a woman whose remains have gone unidentified for 20 years. Perhaps a DNA portrait—even a highly imperfect one—could shed a little light on this mystery.

“Tupac Shakur appeared in concert at the Coachella music festival Sunday night, wowing audiences who watched his image rap with Snoop Dogg.

And now, the Wall Street Journal is reporting (with the puntastic headline “Rapper’s De-Light”) that the late rapper, despite having died in a shooting 15 years ago, may be going on tour.

The image of the rapper is not, in fact, a hologram. The 2D image is an updated version of a stage trick that dates to the 1800s. In the old version, an actor would hide in a recess below the stage while stagehands used mirrors to project the image of a ghost.

According to a 1999 patent uncovered by the International Business Times, the trick used by the company AV Concepts employs an angled piece of glass placed on the stage to reflect a projected image onto a screen that appears invisible to the audience.
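The optics behind this trick, often called “Pepper’s Ghost,” amount to mirror reflection across a tilted pane: the audience perceives a virtual image of the projected picture on the far side of the glass. Below is a minimal sketch of that geometry, with coordinates and the 45-degree angle chosen for illustration rather than taken from the patent:

```python
import numpy as np

def reflect_point(point, plane_point, plane_normal):
    """Mirror `point` across the plane through `plane_point` with the given
    normal; the mirror image is where the audience perceives the 'ghost'."""
    n = np.array(plane_normal, dtype=float)
    n /= np.linalg.norm(n)                  # ensure a unit normal
    p = np.array(point, dtype=float)
    d = np.dot(p - plane_point, n)          # signed distance to the plane
    return p - 2.0 * d * n                  # standard mirror-reflection formula

# A pane of glass tilted 45 degrees, passing through the origin;
# its normal points halfway between the x and y axes.
glass_point = np.array([0.0, 0.0, 0.0])
glass_normal = np.array([1.0, 1.0, 0.0])

# A projected source at (2, 0, 0) appears as a virtual image at (0, -2, 0),
# i.e. floating on the far side of the glass.
virtual = reflect_point([2.0, 0.0, 0.0], glass_point, glass_normal)
print(virtual)
```

Reflecting the virtual image back across the same plane recovers the original source position, which is one quick sanity check on the formula.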

The team pulled together Tupac’s performance by studying old footage and creating an animation that incorporated characteristics of the late rapper’s movements.

AV Concepts president Nick Smith told the Journal that the company had used the technology to digitally resurrect some deceased executives — though he gave no details on that. The patent on the technology shows an example of a presentation where the presenter is on stage with the projected image of a car.

Over at MTV, writer Gil Kaufmann questioned whether the success of the virtual Tupac would set a trend, particularly for performances including multiple artists. The potential for a surprise appearance from a beloved celebrity performer could be a draw for audiences.

But the trick could be overused, Kaufmann wrote: “For example, if Paul McCartney announced a tour with a virtual John Lennon, Beatles fans would likely see that as being in bad taste and not show up.”

Speaking to Kaufmann, Dave Brooks of the magazine Venues Today said that the trick could have gotten tired quickly even in the Coachella performance, but that the effect was impressive when used sparingly.”

Watch this video, and witness a breakthrough in the field of brain-machine interfaces. Researchers have been improving upon BrainGate — a brain-machine interface that allows users to control an external device with their minds — for years, but what you see here is the most advanced incarnation of the implant system to date. It is nothing short of remarkable.

Starting at around 3:10, you can watch Cathy Hutchinson — who has been paralyzed from the neck down for 15 years — drink her morning coffee by controlling a robotic arm using only her mind. According to research published in today’s issue of Nature, Hutchinson is one of two quadriplegic patients — both of them stroke victims — who have learned to control the device by means of the BrainGate neural implant. The New York Times reports that it’s the first published demonstration that humans with severe brain injuries can control a sophisticated prosthetic arm with such a system:

Scientists have predicted for years that this brain-computer connection would one day allow people with injuries to the brain and spinal cord to live more independent lives. Previously, researchers had shown that humans could learn to move a computer cursor with their thoughts, and that monkeys could manipulate a robotic arm.

The technology is not yet ready for use outside the lab, experts said, but the new study is an important step forward, providing dramatic evidence that brain-controlled prosthetics are within reach.

“It is a spectacular result, in many respects,” said John Kalaska, a neuroscientist at the University of Montreal who was not involved in the study, “and really the logical next step in the development of this technology. This is the kind of work that has to be done, and it’s further confirmation of the feasibility of using this kind of approach to give paralyzed people some degree of autonomy.”

Hutchinson’s control over the robotic arm is not perfect, but it’s damn impressive. As the video points out, the arm featured in the video is currently programmed to compensate for lurches and unexpected collisions by “entering safety mode” and ceasing movement, but future versions of the arm will presumably be capable of finer, more delicate motions.

What remains to be seen is how such precision will be achieved. One of the things that makes the arm and hand movements of able-bodied people so precise is their ability to actually feel objects in the real world, and sense the position of their limbs in space (a sensation known as proprioception). The interface between our brains and our limbs is therefore bi-directional, meaning we can not only reach for something with our hands, but receive sensory feedback that allows us to make necessary adjustments to our movement, giving rise to improved dexterity and more purposeful, calculated movement.

A bi-directional brain-machine-brain interface may sound like blue-sky technology, but it was successfully demonstrated in monkeys less than a year ago. Could the technology demonstrated in the video up top be married with that of a bi-directional brain-machine-brain interface?