Monday, February 06. 2017

Note: following the two previous posts about algorithms and bots ("How do they...?"), here comes a third one.

Slightly different, and not really dedicated to bots per se, but it can nonetheless be considered related to "machinic intelligence". This time it concerns techniques and algorithms developed to understand the brain (the BRAIN Initiative, or in Europe the competing Blue Brain Project).

In a funny reversal, scientists applied techniques and algorithms developed to track patterns of human intelligence in large data sets to the computer itself. How does a simple chip "compute information"? The results are surprising: the computer doesn't understand how the computer "thinks" (or rather works, in this case)!

This seems to confirm that the brain is certainly not a computer (made out of flesh)...

When you apply tools used to analyze the human brain to a computer chip that plays Donkey Kong, can they reveal how the hardware works?

Many research schemes, such as the U.S. government’s BRAIN initiative, are seeking to build huge and detailed data sets that describe how cells and neural circuits are assembled. The hope is that using algorithms to analyze the data will help scientists understand how the brain works.

But those kinds of data sets don't yet exist. So Eric Jonas of the University of California, Berkeley, and Konrad Kording from the Rehabilitation Institute of Chicago and Northwestern University wondered if they could use their analytical software to work out how a simpler system worked.

They settled on the iconic MOS 6502 microchip, which was found inside the Apple I, the Commodore 64, and the Atari Video Game System. Unlike the brain, this slab of silicon is built by humans and fully understood, down to the last transistor.

The researchers wanted to see how accurately their software could describe its activity. Their idea: have the chip run different games—including Donkey Kong, Space Invaders, and Pitfall, which have already been mastered by some AIs—and capture the behavior of every single transistor as it did so (creating about 1.5 GB per second of data in the process). Then they would turn their analytical tools loose on the data to see if they could explain how the microchip actually works.

For instance, they used algorithms that could probe the structure of the chip—essentially the electronic equivalent of a connectome of the brain—to establish the function of each area. While the analysis could determine that different transistors played different roles, the researchers write in PLOS Computational Biology, the results “still cannot get anywhere near an understanding of the way the processor really works.”
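A hedged illustration of what this connectome-style structural analysis looks like in practice: partition a connectivity graph into candidate functional regions and see what falls out. This is a minimal sketch of the general idea, not the authors' pipeline; the random adjacency matrix stands in for the chip's real netlist, and all names and numbers are illustrative.

```python
# Minimal sketch: cluster a transistor connectivity graph into candidate
# "functional regions", in the spirit of connectome-style analysis.
# The synthetic adjacency matrix below stands in for a real netlist.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

n = 200  # toy "transistor" count; the real 6502 has thousands
adjacency = (rng.random((n, n)) < 0.05).astype(float)
adjacency = np.maximum(adjacency, adjacency.T)  # make the graph undirected
np.fill_diagonal(adjacency, 0)

# Partition the graph; clusters are candidate functional regions.
labels = SpectralClustering(
    n_clusters=5, affinity="precomputed", random_state=0
).fit_predict(adjacency)

for k in range(5):
    print(f"region {k}: {np.sum(labels == k)} transistors")
```

As the paper's result suggests, a partition like this tells you that regions exist, not what any region is for.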

Elsewhere, Jonas and Kording removed a transistor from the microchip to find out what happened to the game it was running—analogous to so-called lesion studies where behavior is compared before and after the removal of part of the brain. While the removal of some transistors stopped the game from running, the analysis was unable to explain why that was the case.
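The experimental loop itself is easy to state, which is part of the paper's point. Below is a toy sketch of a lesion study: knock out one element at a time and record whether the "organism" still behaves. The Chip class is a stand-in for a transistor-level simulator of the MOS 6502, and none of these names or numbers come from the authors' code.

```python
# Toy sketch of a lesion study: disable one transistor at a time and
# record whether the game still runs.

class Chip:
    def __init__(self, n_transistors, critical):
        self.n = n_transistors
        self.critical = set(critical)  # transistors the toy "game" needs

    def runs_game(self, disabled=None):
        # Toy rule: the game boots unless a critical transistor is disabled.
        return disabled not in self.critical

def lesion_study(chip):
    """Return the transistors whose removal stops the game."""
    return [t for t in range(chip.n) if not chip.runs_game(disabled=t)]

# ~3,500 transistors is roughly the scale of the real 6502;
# the "critical" set here is made up for the example.
chip = Chip(n_transistors=3510, critical={12, 847, 2201})
print(lesion_study(chip))  # -> [12, 847, 2201]
```

Knowing which transistors are essential for Donkey Kong is exactly the kind of result the analysis produced, and exactly the kind that falls short of explanation: the list says nothing about why those transistors matter.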

In these and other analyses, the approaches provided interesting results—but not enough detail to confidently describe how the microchip worked. “While some of the results give interesting hints as to what might be going on,” explains Jonas, “the gulf between what constitutes ‘real understanding’ of the processor and what we can discover with these techniques was surprising.”

It’s worth noting that chips and brains are rather different: synapses work differently from logic gates, for instance, and the brain doesn’t distinguish between software and hardware like a computer. Still, the results do, according to the researchers, highlight some considerations for establishing brain understanding from huge, detailed data sets.

First, simply amassing a handful of high-quality data sets of the brain may not be enough for us to make sense of neural processes. Second, without many detailed data sets to analyze just yet, neuroscientists ought to remain aware that their tools may provide results that don't fully describe the brain's function.

As for the question of whether neuroscience can explain how an Atari works? At the moment, not really.

Wednesday, July 23. 2014

What if the compass app in your phone didn’t just visually point north but actually seemed to pull your hand in that direction?

Two Japanese researchers will present tiny handheld devices that generate this kind of illusion at next month’s annual SIGGRAPH technology conference in Vancouver, British Columbia. The “force display” devices, called Traxion and Buru-Navi3, exploit the fact that a vibrating object is perceived as either pulling or pushing when held. The effect could be applied in navigation and gaming applications, and it suggests possibilities in mobile and wearable technology as well.

Tomohiro Amemiya, a cognitive scientist at NTT Communication Science Laboratories, began the Buru-Navi project in 2004, originally as a way to research how the brain handles sensory illusions. His initial prototype was roughly the size of a paperback novel and contained a crankshaft mechanism to generate vibration, similar to the motion of a locomotive wheel. Amemiya discovered that when the vibrations occurred asymmetrically at a frequency of 10 hertz—with the crankshaft accelerating sharply in one direction and then easing back more slowly—a distinctive pulling sensation emerged in the direction of the acceleration.
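To make the asymmetry concrete, here is a minimal numerical sketch of such a waveform, assuming a simple rectangular profile: a brief, strong acceleration in one direction followed by a longer, gentler return. The exact waveform Amemiya used is not reproduced here; the duty cycle and amplitudes are illustrative.

```python
# Sketch of an asymmetric 10 Hz acceleration profile: a short, sharp
# push one way, then a long, gentle return. Values are illustrative.
import numpy as np

fs = 1000                  # samples per second
f = 10                     # 10 Hz, the frequency Amemiya found effective
t = np.arange(fs) / fs     # one second of samples
phase = (t * f) % 1.0      # position within each vibration cycle

duty = 0.2                 # 20% of each cycle is the sharp phase
accel = np.where(phase < duty, 1.0 / duty, -1.0 / (1 - duty))

# The acceleration integrates to zero over each cycle, so the device
# goes nowhere, but the brief strong phase dominates perception and
# reads as a steady pull in that direction.
print(f"mean acceleration: {accel.mean():.6f}")  # ~0.0
```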

With his collaborator Hiroaki Gomi, Amemiya continued to modify and miniaturize the device into its current form, which is about the size of a wine cork and relies on a 40-hertz electromagnetic actuator similar to those found in smartphones. When pinched between the thumb and forefinger, Buru-Navi3 creates a continuous force illusion in one direction (toward or away from the user, depending on the device’s orientation).

The second device, called Traxion, was developed within the last year at the University of Tokyo by a team led by computer science researcher Jun Rekimoto. Traxion also generates a force illusion via an asymmetrically vibrating actuator held between the fingers. “We tested many users, and they said that it feels as if there’s some invisible string pulling or pushing the device,” Rekimoto says. “It’s a strong sensation of force.”

Both devices create a pulling force significant enough to guide a blindfolded user along a path or around corners. This way-finding application might be a perfect fit for the smart watches that Samsung, Google, and perhaps Apple are mobilizing to sell.

Haptics, which is the name for the technology behind tactile interfaces, has been explored for years in limited or niche applications. But Vincent Hayward, who researches haptics at the Pierre and Marie Curie University in Paris, says the technology is now “reaching a critical mass.” He adds, “Enough people are trying a sufficient number of ideas that the balance between novelty and utility starts shifting.”

Nonetheless, harnessing these kinesthetic effects for mainstream use is easier said than done. Amemiya admits that while his device generates strong force illusions while being pinched between a finger and thumb, the effect becomes much weaker if the device is merely placed in contact with the skin (as it would be in a watch).

The rise of even crude haptic wearable devices could accelerate this kind of scientific research, though. “A wearable system is always on, so it records data constantly,” Amemiya explains. “This can be very useful for understanding human perception.”

Wednesday, December 04. 2013

Google no longer understands how its “deep learning” decision-making computer systems have made themselves so good at recognizing things in photos

(…)

The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting “deep learning” systems to work.

(…)

This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently from its creators, and its complex cognitive processes are inscrutable. This “thinking” is within an extremely narrow remit, but it is demonstrably effective and independently verifiable.

Friday, September 20. 2013

There is a great, undiscovered potential in virtual reality development. Sure, you can create lifelike virtual worlds, but you can also make players sick. Oculus VR founder Palmer Luckey and VP of product Nate Mitchell hosted a panel at GDC Europe last week, instructing developers on how to avoid the VR development pitfalls that make players uncomfortable. It was a lovely service for VR developers, but we saw a much greater opportunity. Inadvertently, the panel explained how to make players as queasy and uncomfortable as possible.

And so, we now present the VR developer's guide to manipulating your players right down to the vestibular level. Just follow these tips and your players will be tossing their cookies in minutes.

Note: If you'd rather not make your players horribly ill and angry, just do the opposite of everything below.

Include lots of small, tight spaces

In virtual reality, small and closed-off areas truly feel small, said Luckey. "Small corridors are really claustrophobic. It's actually one of the worst things you can do for most people in VR, is to put them in a really small corridor with the walls and the ceiling closing in on them, and then tell them to move rapidly through it."

Meanwhile, open spaces are a "relief," he said, so you'll want to avoid those.

Possible applications: Air duct exploration game.

Create a user interface that neglects depth and head-tracking

Virtual reality is all about depth and immersion, said Mitchell. So, if you want to break that immersion, your ideal user interface should be as traditional and flat as possible.

For example, put targeting reticles on a 2D plane in the center of a player's field of view. Maybe set it up so the reticle floats a couple of feet away from the player's face. "That is pretty uncomfortable for most players and they'll just try to grapple with what do they converge on: That near-field reticle or that distant mech that they're trying to shoot at?" To sample this effect yourself, said Mitchell, you can hold your thumb in front of your eyes. When you focus on a distant object, your thumb will appear to split in two. Now just imagine that happening to something as vital as a targeting reticle!

You might think that setting the reticle closer to the player will make things even worse, and you're right. "The sense of personal space can make people actually feel uncomfortable, like there's this TV floating right in front of their face that they try to bat out of the way." Mitchell said a dynamic reticle that paints itself onto in-game surfaces feels much more natural, so don't do that.
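Mitchell's preferred alternative is straightforward to sketch. Below is a toy version of a depth-matched reticle, assuming a simple analytic scene: cast a ray from the camera along the view direction, find the nearest surface, and place the reticle at that depth so both the reticle and the target sit at the same convergence distance. A real engine would use its own raycast; every name here is illustrative.

```python
# Toy sketch of a depth-matched ("dynamic") reticle: raycast along the
# view direction and draw the reticle on the first surface it hits.
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Distance to the nearest intersection, or None on a miss."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0 else None

def reticle_position(cam_pos, view_dir, spheres, default_depth=10.0):
    hits = [ray_sphere_hit(cam_pos, view_dir, c, r) for c, r in spheres]
    hits = [t for t in hits if t is not None]
    depth = min(hits) if hits else default_depth  # fall back to a far plane
    return cam_pos + view_dir * depth             # world-space reticle point

cam = np.array([0.0, 1.6, 0.0])                   # eye height in meters
forward = np.array([0.0, 0.0, -1.0])              # unit view direction
scene = [(np.array([0.0, 1.6, -5.0]), 0.5)]       # one sphere 5 m ahead
print(reticle_position(cam, forward, scene))      # lands on the sphere's surface
```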

You can use similar techniques to create an intrusive, annoying heads-up display. Place a traditional HUD directly in front of the player's face. Again, they'll have to deal with double vision as their eyes struggle to focus on different elements of the game. Another option, since VR has a much wider field of view than monitors, is to put your HUD elements in the far corners of the display, effectively putting it into a player's peripheral vision. "Suddenly it's too far for the player to glance at, and they actually can't see pretty effectively." What's more, when players try to turn their head to look at it, the HUD will turn with them. Your players will spin around wildly as they desperately try to look at their ammo counter.
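For the HUD case, the failure mode is easy to quantify. The sketch below, under the illustrative assumption of a roughly 20-degree comfortable glance range (a made-up figure, not one from the talk), computes how far off-center a corner-mounted HUD element sits; because a head-locked element rotates with the head, that offset never shrinks no matter how far the player turns.

```python
# Sketch: angular offset of a head-locked HUD element from view center.
import numpy as np

def hud_offset_deg(element_dir, view_dir):
    """Angle in degrees between a HUD element and the view center."""
    cosang = np.clip(np.dot(element_dir, view_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

view = np.array([0.0, 0.0, -1.0])
ammo_counter = np.array([0.9, 0.0, -0.45])       # far corner of the display
ammo_counter /= np.linalg.norm(ammo_counter)

offset = hud_offset_deg(ammo_counter, view)
print(f"{offset:.0f} degrees off-center")        # ~63 degrees: beyond a glance
# In a head-locked HUD the element moves with the head, so turning toward
# it never reduces this offset; hence the wild spinning described above.
```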

Possible applications: Any menu or user interface from Windows 3.1.

Disable head-tracking or take control away from the player

"Simulator sickness," when players become sick in a VR game, is actually the inverse of motion sickness, said Mitchell. Motion sickness is caused by feeling motion without being able to see it; Mitchell cited riding on a boat rocking in the ocean as an example. "There's all this motion, but visually you don't perceive that the floor, ceiling and walls are moving. And that sensory disconnect, mainly in your vestibular senses, is what creates that conflict that makes you dizzy." Simulator sickness, he said, is the opposite. "You're in an environment where you perceive there to be motion, visually, but there is no motion. You're just sitting in a chair."

If you disable head-tracking in part of your game, it artificially creates just that sort of sensory disconnect. Furthermore, if you move the camera without player input, say to display a cut-scene, it can be very disorienting. When you turn your head in VR, you expect the world to turn with you. When it doesn't, you can have an uncomfortable reaction.
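A sketch of the safe alternative, using quaternion composition via SciPy: even while a scripted cutscene drives the camera rig, the rendered orientation should still compose in the player's live head rotation rather than replace it. The rotations below are illustrative stand-ins.

```python
# Sketch: always compose live head tracking with any scripted camera
# motion; dropping the head pose is what creates the sensory disconnect.
from scipy.spatial.transform import Rotation as R

def camera_orientation(cutscene_rot, head_rot):
    # Scripted rig rotation first, then the player's tracked head on top.
    return cutscene_rot * head_rot

cutscene_rot = R.from_euler("y", 90, degrees=True)   # cutscene pans right
head_rot = R.from_euler("y", -15, degrees=True)      # player glances left
combined = camera_orientation(cutscene_rot, head_rot)
print(combined.as_euler("yxz", degrees=True))        # ~[75, 0, 0]
```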

Possible applications: Frequent, Unskippable Cutscenes: The Game.

Feature plenty of backwards and lateral movement

Forward movement in a VR game tends not to cause problems, but many users have trouble dealing with backwards movement, said Mitchell. "You can imagine sometimes if you sit on a train and you perceive no motion, and the train starts moving backwards very quickly, or you see another car pulling off, all of those different sensations are very similar to that discomfort that comes from moving backwards in space." Lateral movement (i.e., sideways movement) has a similar effect, Mitchell said. "Being able to sort of strafe on a dime doesn't always cause the most comfortable experience."

Possible applications: Backwards roller coaster simulator.

Quick changes in altitude

"Quick changes in altitude do seem to cause disorientation," said Mitchell. Exactly why that happens isn't really understood, but it seems to hold true among VR developers. This means that implementing stairs or ramps into your games can throw players for a loop ? which, remember, is exactly what we're after.Don't use closed elevators, as these prevent users from perceiving the change in altitude, and is generally much more comfortable.

Possible applications: A VR version of the last level from Ghostbusters on NES. Also: Backwards roller coaster simulator.

Don't include visual points of reference

When players look down in VR, they expect to see their character's body. Likewise, in a space combat or mech game, they expect to see the insides of the cockpit when they look around. "Having a visual identity is really crucial to VR. People don't want to look down and be a disembodied head." For the purposes of this guide, that makes a disembodied head the ideal avatar for aggravating your players.

Subtly tilt the horizon

Okay, this is probably one of the most devious ways to manipulate your players. Mitchell imagines a simulation of sitting on a beach, watching the sunset. "If you subtly tilt the horizon line very, very minimally, a couple degrees, the player will start to become dizzy and disoriented and won't know why."

Possible applications: Drunk at the Beach.

Shoot for a low frame rate, disable V-sync

"With VR, having the world tear non-stop is miserable." Enough said. Furthermore, a low frame rate can be disorienting as well. When players move their heads and the world doesn't move at the same rate of speed, its jarring to their natural senses.

Possible applications: Limitless.

In Closing

Virtual reality is still a fledgling technology and, as Luckey and Mitchell explained, there's still a long way to go before both players and developers fully understand it. There are very few points of reference, and there is no widely established design language that developers can draw from.

What Luckey and Mitchell have detailed - and what we've decided to ignore - is a basic set of guidelines on maintaining player comfort in the VR space. Fair warning though, if you really want to design a game that makes players sick, the developers of AaaaaAAaaaAAAaaAAAAaAAAAA!!! already beat you to it.

Monday, March 29. 2010

New Scientist published an interesting article this week about the influence of the body's positioning in space on one's thought processes. According to recent research, space and the body are much more connected to the mind than has traditionally been accepted. The article cites a study by researchers at the University of Melbourne in Parkville, Australia, which found that the eye movements of 12 right-handed male subjects could be used to predict the magnitude of each number in a series that the participants were asked to generate: looking left and downwards preceded a smaller number than the previous one, while looking up and to the right preceded a larger one. A separate study at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, asked 24 students to move marbles from a box on a higher shelf to one on a lower shelf while answering a neutral question, such as "tell me what happened yesterday". The results showed that the subjects were more likely to talk of positive events when moving marbles upwards, and of negative events when moving them downwards.

The notion that our bodies' direct physical relationship to space can influence thoughts is exciting, and it reopens arguments against the ontological distinction between mind and body most commonly identified with Descartes, as well as associated questions of physical determinism vs. indeterminism. Going further, I suspect that less overt interactions between the body and its surrounding environment could also be included in this discussion, such as the psychological perceptions of temperature, humidity, and other similarly invisible environmental characteristics. The New Scientist article also references a 2008 study from the Rotman School of Management in Toronto showing that social exclusion has the effect of making people feel colder. Questions of causality abound here: if social exclusion or inclusion affects a person's temperature perception, would varying temperatures also be able to yield varying types of associated social behavior? Could we extend this discussion to the somewhat perverse notion that a carefully controlled interior environment is actually a form of mind control? ...

A drawing from René Descartes' Meditations on First Philosophy.

fabric | rblg

This blog is the survey website of fabric | ch - studio for architecture, interaction and research.

We curate and reblog articles, research, writing, exhibitions, and projects that we notice and find interesting in our everyday practice and reading.

Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.

This website is used by fabric | ch as an archive of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content here.