Augmented reality (AR) is an emerging technology that enhances, or augments, a user’s perception of the real world. The enhancement can be delivered through sight, sound, or touch and provides additional information about a person’s environment.

Beyond consumer and entertainment applications, AR is highly useful for improving efficiency, safety, and productivity in the workplace by providing a user with important information (e.g., sensor data, inventory information, heat maps) that is not naturally perceivable from the environment. Many industries, including the construction, medical, manufacturing, and defense sectors, have begun investing in and developing AR technologies that enhance a user's ability to complete a task. Recent partnerships between HTC Vive and AECOM, developments by the US military, and projects supported by Medtronic illustrate how AR heads-up displays offer a powerful method for conveying the information needed to plan a construction site, learn a medical procedure, or successfully execute a military training exercise.

In addition to these on-Earth settings, AR has similar applications for improving operational tasks during space exploration. Astronauts stationed on the International Space Station (ISS) have already experimented with Microsoft's HoloLens to complete tasks.
Feedback from astronauts who tested the HoloLens, along with NASA's roadmap plans, indicates that AR could be used to (1) superimpose instructions or illustrations that guide an astronaut through maintenance repairs, and (2) enable remote visibility between an astronaut and a ground operator so they can solve an issue together in real time. The HoloLens is also being used to plan the next Mars mission. Additional projects such as AMARIS and open calls for AR solutions highlight NASA's interest in developing AR technologies and AR's growing ability to support the unique conditions of aeronautic and space missions.

2018 is off with a bang, and so are we! We spent this past week in Las Vegas demoing our technology at CES and had a great time talking with fellow entrepreneurs and technologists. Here are a few key takeaways from our CES experience.

Great Startups Start Anywhere

During CES, Shantanu spoke on a Techstars panel themed "Great Startups Start Anywhere" to share why companies decide to start outside of Silicon Valley and the benefits of doing so. In addition to representing our home state, Shantanu highlighted the importance of the networks and resources we have in Arizona, and how Phoenix is becoming a startup city. The diversity of company headquarters throughout Eureka Park also supports the claim that great startups start anywhere. Companies we talked with came from a wide variety of US cities, including Kansas City, Arlington, Cincinnati, and Detroit. Exhibitors from the Netherlands, Canada, and France, among many other countries, also had a large presence in Eureka Park.

It's been about six weeks since we returned to Phoenix after participating in Kansas City Techstars, meaning this post is pretty overdue. I've had some time to reflect on the program and wanted to share three Techstars experiences that remain valuable as I continue working at Somatic Labs and in the startup community.

We have an exciting announcement to make: we've just released an early preview of our SDK! If you're a software developer and have been waiting to start developing for Moment, here's your first look. We're excited to see what you'll make!

The SDK is still under development, and it’s likely to change in the coming months.

Introduction

This repository contains the Software Development Kit (SDK) for Moment, the wearable device that communicates entirely through your sense of touch.

This SDK contains the code that is executed on Moment devices inside a custom JavaScript runtime environment. To simplify the process of creating custom embedded software for Moment, we provide several ready-to-use functions for creating event callbacks, transitioning the LED color, and creating rich haptic effects.

The Da Vinci Surgical System is a robot built by Intuitive Surgical. After being approved for use by the FDA in 2000, it has been adopted by surgeons performing a wide range of minimally invasive procedures, including prostatectomies, cardiac valve repair, and gynecologic procedures. As of June 30, 2014, approximately 3,100 Da Vinci robots had been installed worldwide, with each unit costing roughly $2 million. The primary innovation of the Da Vinci system is the surgeon's console: an immersive visualization system that takes an ordinary laparoscopic image and projects it to a binocular display, enhancing the dexterity with which a surgeon can perform several procedures. For the patient, the Da Vinci system typically means less pain and blood loss, frequently resulting in a shorter hospital stay and a faster recovery period.

Sensory synesthesia is a neurological phenomenon in which stimuli from one sensory modality lead to involuntary and automatic experiences in another sensory modality. There is some debate regarding the classification of synesthetic phenomena, but several striking observations reveal that at least a small percentage of people experience a heightened interconnectedness between their different senses.

Kiki, Bouba, and Visual Perception

In 1929, the German-American scientist Wolfgang Köhler observed what is now known as the bouba-kiki effect [1]. In 2001, Vilayanur S. Ramachandran replicated Köhler's experiment with college students in the United States and India, and found a large consensus among participants prompted to assign auditory names to visual objects [2]. The findings of Ramachandran and Köhler demonstrate that sensory information appears to carry a predictable and consistent scaffolding of associations and relationships to other modalities of stimuli: the participants' visual perceptions of the shapes printed on the page were used to judge which auditory sounds ought to be associated with those shapes. Ramachandran and his colleague Edward Hubbard suggest that the evolution of language may not be entirely arbitrary; instead, the naming of objects in space may reflect a natural association of auditory stimuli with the visual, tactile, olfactory, and overall perception of an object's nature. Sounds (and by extension, all sensory information) may automatically convey some degree of symbolic meaning in relation to experiences from other senses.

Auditory-Tactile Synesthesia

MRI imaging of a patient with a localized lesion in the right ventrolateral nucleus of the thalamus revealed changes to the individual's perception. "Initially, the patient was more likely to detect events on the contralesional side when a simultaneous ipsilesional event was presented within the same, but not different sensory modality." Eventually, this transformed into a form of synesthesia "in which auditory stimuli produce tactile percepts." This study suggests that sensory synesthesia may be acquired after a brain injury [3].

Visual-Tactile Synesthesia

Mirror-touch synesthesia is a condition in which watching another person being touched activates a neural circuit similar to the one activated by actual touch. Brain imaging of individuals who experience mirror-touch synesthesia suggests that their empathic responses to the experiences of other people are heightened [4]. This form of synesthesia also appears to augment an individual's ability to recognize and interpret the facial expressions of an interaction partner [5]. Although a thorough empirical explanation for the phenomenon has not yet been developed, several potential theoretical explanations are being investigated in more detail. The Threshold Theory explains it "in terms of hyper-activity within a mirror system for touch and/or pain," and the Self-Other Theory explains it "in terms of disturbances in the ability to distinguish the self from others" [6]. The two theories carry different implications: the Threshold Theory implies a localized phenomenon affecting the mirror system, while the Self-Other Theory implies a more general difference that may be reflected in other cognitive processes as well.

Enhanced Sensory Perception

Some scholars argue that artistic experimentation may be rooted in sensory synesthesia, which allows an artist to describe a sensory experience with a wider range of detail [7]. Although scientists have developed methods for testing and profiling synesthetes [8], much of the theoretical framework used to understand cross-modal sensory perception remains speculative. VS Ramachandran mentions a possible relationship between synesthesia and enhanced sensory perception [9], but it remains unclear exactly how this enhancement manifests itself in a person's ability to perform different activities or pursue artistic endeavors. In a preliminary study exploring the perceptual processing abilities of synesthetes [10], "there was a relationship between the modality of synaesthetic experience and the modality of sensory enhancement." In other words, a synesthete who experiences color triggered by other sensory modalities will also have enhanced color perception, and a synesthete who experiences tactile sensations will have enhanced tactile perception. Further research is required to understand exactly how these enhanced perceptual abilities manifest themselves in common tasks.

Adafruit provides a breakout board for the DRV2605 haptic driver from Texas Instruments. Although the example tutorial included with the product describes a quick way to set up the driver with an eccentric rotating mass (ERM) motor, we prefer using a linear resonant actuator (LRA) for increased precision and enhanced haptic feedback. You can use the breakout board with an Arduino Uno to quickly make a prototype of a system that delivers precise vibrotactile cues.


Creating Haptic Feedback

Step 1: Soldering

Solder the header strip onto the breakout board, and solder the LRA onto the breakout board. After this step, your DRV2605 breakout board should have both the headers and the LRA attached.

Step 2: Wiring and Hookup

Connect VIN on the DRV2605 to the 5V supply of the Arduino

Connect GND on the DRV2605 to GND on the Arduino

Connect the SCL pin to the I2C clock SCL pin on your Arduino, which is labelled A5

Connect the SDA pin to the I2C data SDA pin on your Arduino, which is labelled A4

Connect the IN pin to an I/O pin, such as A3

Step 3: Testing and Creating Effects

Adafruit provides a very useful Arduino library for the DRV2605 that you can use to get started. In particular, we recommend looking through the example code to get an idea of the effects you can produce. On pages 57 and 58 of the DRV2605 datasheet, you can find a table of all the effects you can produce "out of the box."
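As a minimal sketch of that workflow (assuming the wiring from Step 2 and a recent version of the Adafruit_DRV2605 library, which provides useLRA()), the following Arduino program configures the driver for an LRA and repeatedly plays one of the built-in effects from the datasheet's table:

```cpp
// Assumes the DRV2605 is wired to the Uno's I2C pins as described in Step 2.
#include <Wire.h>
#include <Adafruit_DRV2605.h>

Adafruit_DRV2605 drv;

void setup() {
  drv.begin();                        // initialize the DRV2605 over I2C
  drv.useLRA();                       // switch from the default ERM mode to LRA mode
  drv.selectLibrary(6);               // library 6 holds the LRA-tuned effect waveforms
  drv.setMode(DRV2605_MODE_INTTRIG);  // play waveforms on a software trigger (the go() call)
}

void loop() {
  drv.setWaveform(0, 1);  // slot 0: effect #1, "Strong Click - 100%", from the datasheet table
  drv.setWaveform(1, 0);  // slot 1: zero terminates the waveform sequence
  drv.go();               // trigger playback
  delay(1000);            // wait a second before repeating
}
```

Swapping the 1 in setWaveform(0, 1) for any other effect number from the datasheet table changes the sensation, and the sequencer lets you chain several effects by filling additional slots before the terminating zero.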

Step 4: Creating Your Own Waveforms

Since you can also set the intensity of the LRA in real time, you can design your own waveforms and effects by changing the drive value over time. Adafruit also provides an example of setting the value in real time on GitHub. You can combine this example code with a waveform design tool like Macaron to customize the feedback provided by your new Arduino-powered haptic device!
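As a rough illustration of this approach (a sketch under the same wiring assumptions as above, not Adafruit's own example), this program puts the DRV2605 into real-time playback (RTP) mode and ramps the drive level up and down to trace a simple triangular intensity envelope:

```cpp
// Assumes the DRV2605 is wired to the Uno's I2C pins as described in Step 2.
#include <Wire.h>
#include <Adafruit_DRV2605.h>

Adafruit_DRV2605 drv;

void setup() {
  drv.begin();                         // initialize the DRV2605 over I2C
  drv.useLRA();                        // configure the driver for an LRA
  drv.setMode(DRV2605_MODE_REALTIME);  // drive the actuator directly from the RTP register
}

void loop() {
  // Ramp the intensity up, then back down, to form one period of a triangle wave.
  for (int level = 0; level <= 127; level++) {
    drv.setRealtimeValue(level);  // 0 is off; 127 is full drive in the default data format
    delay(4);
  }
  for (int level = 127; level >= 0; level--) {
    drv.setRealtimeValue(level);
    delay(4);
  }
}
```

Replacing the triangular ramp with a sequence of levels exported from a design tool like Macaron is one way to play back hand-designed waveforms.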

We work with local companies whenever we can. For manufacturing and assembly, we work with Quiktek Assembly in Tempe, Arizona. For component sourcing, we work with Avnet, a leading electronics distributor headquartered in Phoenix. Many of our primary partners are within a quick 15-minute drive of our office, and we are also working to source all of our plastics and miscellaneous parts from local distributors.

Beyond keeping Americans employed, we can guarantee a few things almost every big brand (including the ones named after fruit) cannot:

we pay fair wages

we never employ underage workers

our facilities are powered by cleaner sources of energy

we recycle whenever possible

we meet all EPA regulations

We produce and assemble our products in the United States, and we're always looking for opportunities to bring jobs back here to the USA. It's the only way we can ensure we deliver an honest, high-quality product that isn't subsidized by environmental catastrophe and unfair practices.

We asked ourselves: what do they all have in common? They all had a video with excellent production value: a video that could cost anywhere from $25,000 to $100,000 or more depending on whether the actors were paid.

As a bootstrapped startup that hasn't raised a large round of investment, we needed to get creative. We used $2,000 of our savings to film a video that could easily have cost 10x as much. We recruited a bunch of our talented friends, who are musicians, dancers, researchers, and bodybuilders. Then, we filmed footage and edited until we reached our final iteration:

The wait is over. We’ve finished the design, iterated on the hardware, and written thousands of lines of code. Now, we’re ready to start collecting pre-orders for Moment, the first device that communicates entirely through your sense of touch.
For the first 24 hours, backers will receive a special early bird price of $99 — you won’t be able to get this price anywhere else, ever again.

Spread the word.

Help us bring Moment to as many people as possible. Share Moment with your friends on Facebook, Twitter, Instagram, or elsewhere!