The Center for Modeling, Simulation and Imaging in Medicine (CeMSIM) at Rensselaer Polytechnic Institute, Troy, NY, USA invites applications for several postdoctoral positions to work on multiple NIH-funded projects developing virtual surgery technology. The ideal candidate will develop the next-generation surgical simulator based on advanced physics-based computational methods and robotic systems, in collaboration with surgeons at Harvard Medical School. Read more on Jobs: Multiple postdoctoral positions in virtual surgery at RPI…

Traveling is a hobby of mine. Before kids, my husband and I traveled the world whenever we pleased. Now we have to wait for school vacations to take off on a new journey, which limits our winter getaways. That’s when I came up with virtual vacations.

“The world is a book and those who do not travel read only one page.” ~ St. Augustine

I’ve always felt the need to fill the gap between vacations. Since my family loves foods from around the globe, each week we immerse ourselves in the culture of a different country by eating its food and listening to its music. For the length of a meal, we can travel to places like Greece, Thailand or Egypt. If we don’t have the music from a previous vacation, we head over to the Princeton Public Library for a CD, or in a pinch use Pandora radio. Read more on Virtual vacations…

Stories in both their telling and their hearing are central to human experience, playing an important role in how humans understand the world around them. Entertainment media and other cultural artifacts are often designed around the presentation and experience of narrative. Even in video games, which need not be narrative, the vast majority of blockbuster titles are organized around some kind of quest narrative and many have elaborate stories with significant character development. Games, interactive fiction, and other computational media allow the dynamic generation of stories through the use of planning techniques, simulation (emergent narrative), or repair techniques. These provide new opportunities, both to make the artist’s hand less evident through the use of aleatory and/or automated methods and for the audience/player to more actively participate in the creation of the narrative.
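The planning approach mentioned above can be illustrated with a minimal sketch: a breadth-first, STRIPS-style search over story actions, where each action has preconditions and effects and the returned action sequence reads as a quest narrative. All action names, preconditions and goals here are invented for illustration.

```python
# Toy plan-based story generation: breadth-first search over
# STRIPS-style story actions (all names are illustrative).
from collections import deque

# action name -> (preconditions, effects added to the story state)
ACTIONS = {
    "find_sword":   ({"at_village"}, {"has_sword"}),
    "travel_cave":  ({"has_sword"}, {"at_cave"}),
    "slay_dragon":  ({"at_cave", "has_sword"}, {"dragon_dead"}),
    "rescue_royal": ({"dragon_dead"}, {"royal_safe"}),
}

def plan_story(initial, goal):
    """Search for the shortest action sequence that achieves the goal."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, story = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return story
        for name, (pre, add) in ACTIONS.items():
            if pre <= state:       # action is applicable
                nxt = frozenset(state | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, story + [name]))
    return None

print(plan_story({"at_village"}, {"royal_safe"}))
# → ['find_sword', 'travel_cave', 'slay_dragon', 'rescue_royal']
```

Real systems replace this blind search with heuristic planners and much richer action models, but the underlying idea, narrative as a plan from an initial state to a goal, is the same.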

Stories have also been deeply involved in the history of artificial intelligence, with story understanding and generation being important early tasks for natural language and knowledge representation systems. And many researchers, particularly Roger Schank, have argued that stories play a central organizing role in human intelligence. This viewpoint has also seen a significant resurgence in recent years.

Chaotic Moon Labs’ Kinect-controlled skateboard was pretty awesome, but the company has managed to blow our minds with its Board of Imagination. Crave talks with one of the creators to find out how it works.

Remember the Board of Awesomeness, the Kinect-controlled motorized skateboard from CES? Well, it just got more awesome.

The creator of this high-tech board, Chaotic Moon Labs, has come up with a new version called the Board of Imagination that works by reading your brain waves. That’s right, a mind-controlled skateboard. You simply imagine where you’d like to go and how fast you want to get there, and the Board of Imagination will take care of the rest.

It’s powered by the same 800-watt electric motor and Windows 8-enabled Samsung tablet as the Board of Awesomeness, but it adds an Emotiv headset that reads your thoughts to set the board in motion.

To wrap our heads around how it all works, Crave talked with Whurley (like Prince and Cher, it’s just Whurley), general manager of Chaotic Moon Labs, to learn more about the technology behind the Board of Imagination. Read more on A brainwave-controlled skateboard…

The international CYNETART competition is open to artists, designers and scientists whose artistic and reflective work engages with media technology; interdisciplinary and hybrid approaches are especially encouraged. The call for submissions has accompanied the festival every two years since 1996. The competition awards some of Europe’s most prestigious prizes in the field of media art. Read more on Call: CYNETART Competition 2012…

Creating the pattern for a new dress design can be fiddly, so Amy Wibowo at the University of Tokyo, Japan, is using augmented reality to make it simpler.

Six ceiling-mounted cameras are trained on the dummy and on two tools held by the designer, one for creating surfaces and the other for cutting them. The tools and the dummy both have markers, so the cameras can work out where they are in 3D space relative to each other. As the designer draws and works on and around the physical mannequin, this shows up on a virtual onscreen version. Read more on Virtual tailor’s dummy makes designing clothes easy…
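The core geometric step in this kind of marker tracking, expressing one tracked object's pose relative to another's, can be sketched as follows. The 4x4 homogeneous transforms and the example poses are illustrative assumptions, not details of the actual system.

```python
# Relative-pose step in marker tracking (illustrative): given each
# marker's 4x4 pose in camera coordinates, express the tool's pose
# in the dummy's frame: T_dummy_tool = inv(T_cam_dummy) @ T_cam_tool.
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_cam_dummy, T_cam_tool):
    """Tool pose expressed in the dummy's coordinate frame."""
    return np.linalg.inv(T_cam_dummy) @ T_cam_tool

# Hypothetical example: dummy 1 m from the camera, tool 0.2 m to its side.
T_dummy = pose(np.eye(3), [0.0, 0.0, 1.0])
T_tool  = pose(np.eye(3), [0.2, 0.0, 1.0])
print(relative_pose(T_dummy, T_tool)[:3, 3])  # tool offset in the dummy frame
```

With the tool's position known in the dummy's own frame, the system can draw or cut the corresponding virtual surface on the onscreen mannequin regardless of where either object sits in the room.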

User interfaces that adapt to the user’s activities, situation and knowledge promise a significantly improved user interaction. Adaptive user interfaces need to model the user to make inferences about the interaction, requiring techniques such as agent-based modelling, machine learning and data mining. This PhD will investigate users’ interactions with adaptive user interfaces and novel techniques for adaptation in user interfaces, particularly in relation to mobile devices.
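One simple form of the adaptation described above can be sketched as follows: the interface models the user from interaction logs (here, raw usage counts) and reorders a mobile menu so frequent actions surface first. The menu items and the frequency-based model are illustrative assumptions, not part of any specific project.

```python
# Minimal adaptive-UI sketch: a usage-frequency user model
# driving menu reordering (items are illustrative).
from collections import Counter

class AdaptiveMenu:
    def __init__(self, items):
        self.items = list(items)
        self.usage = Counter()

    def select(self, item):
        """Record an interaction; the user model is just a usage count."""
        self.usage[item] += 1

    def ordered(self):
        """Most-used items first; ties keep the original order."""
        return sorted(self.items, key=lambda i: -self.usage[i])

menu = AdaptiveMenu(["call", "camera", "maps", "mail"])
for action in ["maps", "mail", "maps"]:
    menu.select(action)
print(menu.ordered())  # → ['maps', 'mail', 'call', 'camera']
```

Research systems replace the counter with richer models (machine-learned predictions of the next action, situation-aware features), but the loop is the same: observe the user, update a model, adapt the interface.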

Google is in the news this week not so much for their software and search offerings, but for their hardware, and whispers of an item yet to come.

According to the New York Times, Google is developing a type of Android-based glasses that will, in some way, project content immediately into the wearer’s field of vision. The glasses reportedly include the features users have come to rely on in their smartphones, like GPS, cameras, and the ability to play and record audio. The Times reports:

Several people who have seen the glasses, but who are not allowed to speak publicly about them, said that the location information was a major feature of the glasses. Through the built-in camera on the glasses, Google will be able to stream images to its rack computers and return augmented reality information to the person wearing them. For instance, a person looking at a landmark could see detailed historical information and comments about it left by friends. If facial recognition software becomes accurate enough, the glasses could remind a wearer of when and how he met the vaguely familiar person standing in front of him at a party. They might also be used for virtual reality games that use the real world as the playground.

The first International Workshop on Intelligent Multimodal Interfaces applied in skills transfer and healthcare and rehabilitation, co-located with the 8th International Conference on Intelligent Environments (IE 12), aims to bring together researchers, developers and practitioners involved in the research areas of machine learning, bio-signals processing, electronics, robotics, mechatronics, virtual & augmented reality, medicine and rehabilitation.

Multimodal interfaces have changed and revolutionized the way people communicate, interact and learn. We are living in an age in which these technological advances have raised human potential to levels not seen before. Thanks to science and technology, we can watch fiction become reality and see multimodal interfaces integrated naturally into everyday life. This close integration between humans and technology is an extraordinary opportunity to investigate human behavior through this interaction and to analyze the potential and advantages of multimodal interfaces in the fields of skills transfer, healthcare and rehabilitation. Read more on Call: International Workshop on Intelligent Multimodal Interfaces Applied in Skills Transfer and Healthcare and Rehabilitation (IMIASH 2012)…

On St. Valentine’s Day, we want to be close to the ones we love. Researchers from Germany, France, Italy, the Netherlands and the UK are testing whether one day that special person in our life could be a robot.

Experiments have shown that children, for example, can become extremely attached to a robot playmate, but can the robot in turn develop a bond with a human being? Could we one day expect robots to develop a behaviour that resembles human attachment? That is the question being explored in the ALIZ-E project.

Emotion is central to all interactions, including the way we interact with technology. The ALIZ-E project focuses on robot-child interaction, capitalising on children’s open and imaginative responses to artificial ‘creatures’: children have said they want their robot friend to help with homework, to play, or even to cook.

To enable such self-sustaining and constructive interactions, ones that take place between robot and human over days and weeks rather than just a few minutes, the ALIZ-E project is looking to implement memory systems in robots. The role of memory is crucial in human social behaviour. While social relationships happen in the ‘here and now,’ they also depend on the past, because our current behaviours are influenced by previous experiences of similar situations. Read more on ALIZ-E: Giving robot companions memory to enhance the human-robot bond…
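A toy illustration of the kind of episodic memory the project describes (not ALIZ-E's actual architecture; the class, fields and example data are invented): the robot stores past interaction episodes and recalls similar ones so that earlier experiences can influence its current behaviour.

```python
# Toy episodic memory for a companion robot (illustrative only):
# store past interaction episodes, recall similar ones later.
from dataclasses import dataclass

@dataclass
class Episode:
    child: str      # who the robot interacted with
    activity: str   # what they did together
    mood: str       # how the child seemed to respond

class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def remember(self, episode):
        self.episodes.append(episode)

    def recall(self, child, activity=None):
        """Retrieve past episodes with this child, optionally by activity."""
        return [e for e in self.episodes
                if e.child == child
                and (activity is None or e.activity == activity)]

mem = EpisodicMemory()
mem.remember(Episode("Ana", "quiz", "frustrated"))
mem.remember(Episode("Ana", "dance", "happy"))
# Before starting a quiz with Ana again, the robot can check how it went last time:
print([e.mood for e in mem.recall("Ana", "quiz")])  # → ['frustrated']
```

A real system would match episodes by similarity rather than exact keys and consolidate them over days and weeks, but the principle is the one the article describes: the past shapes the ‘here and now’ of the relationship.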