"Sensor" is the device that detects external stuff that the body (probably) can't detect. "Display" is the way the device shows data to the body (not necessarily visually). This gets around the device-centric / body-centric problem that shows up when using the terms input & output. "Armature" is the physical component of the device: the belt, hat, armband, etc; distinct from the electronics.

+

"Sensor" is the device that detects external stuff that the body (probably) can't detect. Sensors can be divided into those that sense the world, and those that digitize a user's actions. The latter are more commonly called "human input devices", (HID's).

+

+

"Display" is the way the device shows data to the body (not necessarily visually). This gets around the device-centric / body-centric problem that shows up when using the terms input & output.

+

+

"Armature" is the physical component of the device: the belt, hat, armband, etc; distinct from the electronics.

==Mailing List==

==Mailing List==

Line 43:

Line 49:

*Why?

*Why?

−

**Plug-and-play(ish) interoperability of sensors and input modes. Build three sensors and three input modes, and we have 9 possible combinations, und so weiter.

+

**Plug-and-play(ish) interoperability of sensors and displays. Build three sensors and three displays, and we have 9 possible combinations, und so weiter.

**Simplify development by separating sensing and presentation design/tasks.

**Simplify development by separating sensing and presentation design/tasks.

Line 51:

Line 57:

**#Abstract away from the physical phenomena the stimuli represent and look instead at common patterns in the data streams.

**#Abstract away from the physical phenomena the stimuli represent and look instead at common patterns in the data streams.

**#???

**#???

−

**On the data presentation end:

+

**On the data display end:

−

**#Identify and prototype a number of input methods.

+

**#Identify and prototype a number of display methods.

**#For each method think of:

**#For each method think of:

−

**#*What modulations of the signal are possible; intensity, spacing, direction, timing...? Based on that, what is the theoretical bandwidth of that input method?

+

**#*What modulations of the signal are possible; intensity, spacing, direction, timing...? Based on that, what is the theoretical bandwidth of that method?

**#*How sensitive is the average person there? How much of the signal can the brain interpret meaningfully (too much noise vs. too weak of a signal)? Based on that, what is the <em>practical</em> bandwidth that we might expect?

**#*How sensitive is the average person there? How much of the signal can the brain interpret meaningfully (too much noise vs. too weak of a signal)? Based on that, what is the <em>practical</em> bandwidth that we might expect?

We want to "make the invisible visible", to bridge our senses. Many group members were inspired years ago by this awesome Wired article, which describes obtaining an unerring sense of direction via a compass belt. Is the brain really plastic enough to adapt to entirely new senses? How natural would it feel after you've fully adapted? Then what happens when you take the device off?

"Sensor" is the device that detects external stuff that the body (probably) can't detect. Sensors can be divided into those that sense the world, and those that digitize a user's actions. The latter are more commonly called "human input devices", (HID's).

"Display" is the way the device shows data to the body (not necessarily visually). This gets around the device-centric / body-centric problem that shows up when using the terms input & output.

"Armature" is the physical component of the device: the belt, hat, armband, etc; distinct from the electronics.

The goal is a common framework that will:
*encompass most sensory data of the sort we are interested in researching
*allow for presentation of any of those data through any of a wide variety of modalities

*Why?
**Plug-and-play(ish) interoperability of sensors and displays. Build three sensors and three displays, and we have 9 possible combinations, and so on.
**Simplify development by separating sensing and presentation design/tasks.
*How?
**On the data sensing end:
**#Identify a primary set of interesting stimuli, and consider various encodings thereof (one hypothetical encoding comparison is sketched after this outline).
**#Abstract away from the physical phenomena the stimuli represent and look instead at common patterns in the data streams.
**#???
**On the data display end:
**#Identify and prototype a number of display methods.
**#For each method, think of:
**#*What modulations of the signal are possible: intensity, spacing, direction, timing...? Based on that, what is the theoretical bandwidth of that method?
**#*How sensitive is the average person there? How much of the signal can the brain interpret meaningfully (too much noise vs. too weak a signal)? Based on that, what is the <em>practical</em> bandwidth that we might expect? (A worked bandwidth estimate follows the outline.)
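As promised after the "interesting stimuli" item above, here is one hypothetical illustration of what "various encodings" of a single stimulus could mean. Both functions are made up for this example; they encode a compass heading into the same kind of normalized channel list, but with very different display characteristics.

<pre>
def heading_as_one_channel(heading_degrees: float) -> list[float]:
    """Encoding 1: a single channel, heading scaled to [0, 1].
    Compact, but the display must resolve fine differences in one value."""
    return [(heading_degrees % 360.0) / 360.0]

def heading_as_intensity_ring(heading_degrees: float, channels: int = 8) -> list[float]:
    """Encoding 2: one channel per compass point, intensity falling off
    with angular distance from the true heading. Coarser per channel,
    but it maps naturally onto a ring of vibration motors."""
    out = []
    for i in range(channels):
        motor_angle = i * 360.0 / channels
        # smallest angular distance between the heading and this motor
        delta = abs((heading_degrees - motor_angle + 180.0) % 360.0 - 180.0)
        out.append(max(0.0, 1.0 - delta / (360.0 / channels)))
    return out

print(heading_as_one_channel(100.0))     # [0.2777...]
print(heading_as_intensity_ring(100.0))  # strongest at motor 2, spill onto motor 3
</pre>

The point is that the choice of encoding, not just the sensor, determines how much of the signal a given display (and brain) can actually use.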
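For the two bandwidth questions at the end of the outline, a back-of-the-envelope model: a display with N independent channels, each offering L reliably distinguishable levels, updated R times per second, can carry at most N × log2(L) × R bits per second. All of the numbers below (an 8-motor belt, 4 intensity levels, 2 updates per second) are invented for illustration; the practical figures would have to come out of actual testing on skin.

<pre>
import math

def display_bandwidth(channels: int, levels: int, updates_per_sec: float) -> float:
    """Upper-bound information rate of a display, in bits per second."""
    return channels * math.log2(levels) * updates_per_sec

# Theoretical: hypothetical 8-motor belt, 4 intensity levels, 2 updates/s.
print(display_bandwidth(channels=8, levels=4, updates_per_sec=2))  # 32.0

# Practical: if testing showed the skin only reliably separates 2 levels
# at 1 update/s, the usable rate collapses to a quarter of that.
print(display_bandwidth(channels=8, levels=2, updates_per_sec=1))  # 8.0
</pre>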