Augmented reality (AR) is an emerging technology that enhances, or augments, a user’s perception of the real world. The enhancement can be delivered through sight, sound, or touch and provides additional information about a person’s environment.

Beyond consumer and entertainment applications, AR is highly useful for improving efficiency, safety, and productivity in the workplace by providing a user with important information (e.g., sensor data, inventory information, heat mapping) that is not naturally perceivable from the environment. Many industries, including the construction, medical, manufacturing, and defense sectors, have begun to invest in and develop AR technologies that enhance a user's ability to complete a task. Recent partnerships between HTC Vive and AECOM, developments by the US military, and projects supported by Medtronic illustrate how AR heads-up displays offer a powerful method for conveying the information needed to plan a construction site, learn a medical procedure, or successfully execute a military training exercise.

In addition to these on-Earth settings, AR has similar applications for improving operational tasks during space exploration. Astronauts stationed on the International Space Station (ISS) have already experimented with Microsoft's HoloLens to complete tasks.
Feedback from astronauts who tested the HoloLens, along with NASA's roadmap plans, indicates that AR could be used to (1) superimpose instructions or illustrations that guide an astronaut through maintenance repairs, and (2) enable remote visibility between an astronaut and a ground operator to solve an issue together in real time. The HoloLens is also being used to plan the next Mars mission. Additional projects such as AMARIS and open calls for AR solutions highlight NASA's interest in developing AR technologies and AR's growing ability to support unique conditions in aeronautic and space missions.

AR technologies, however, are not all visually based or delivered through a special headset. We responded to one of NASA's challenges and offered our own AR solution: communication through the sense of touch. Not all situations enable clear communication through sight and sound, and those situations require a method of information delivery beyond these two channels. Our technology conveys information felt on the surface of the skin as tactile, or haptic, feedback and allows a person to receive important data even when their eyes and ears are unavailable. More than an ordinary tap or buzz, users feel unique patterns corresponding to information that matters to them.

Our recent participation in the 2018 NASA iTech Cycle I Forum highlighted a few NASA use cases for our technology. Though haptic feedback has been incorporated into joysticks used in training simulations, our technology expands the range of applications in which haptic feedback provides operational enhancement. Addressing the A in NASA (Aeronautics), our technology could allow pilots to receive unique tactile alerts corresponding to sensor information needed for a safe flight without requiring their visual attention. In space, our technology could convey similar information to astronauts whose space suits limit the use of most current AR form factors, including headsets.

As AR continues to evolve, it introduces a growing list of potential applications to both improve operational workflow and enhance user experience. We'll see where the technology goes, but perhaps at the next SpaceX launch, AR will allow viewers who aren't present at Cape Canaveral to experience the feeling of a rocket shaking the Earth as it accelerates into orbit.

Thoughts on CES 2018

2018 is off with a bang, and so are we! We spent this past week in Las Vegas demoing our technology at CES and had a great time talking with fellow entrepreneurs and technologists. Here are a few key takeaways from our CES experience.

Great Startups Start Anywhere

During CES, Shantanu spoke on a Techstars panel themed "Great Startups Start Anywhere" to share the reasons why companies decide to start outside of Silicon Valley and the benefits of doing so. In addition to representing our home state, Shantanu highlighted the importance of the networks and resources we have in Arizona, and how Phoenix is becoming a start-up city. The diversity of company headquarters throughout Eureka Park also illustrates the claim that great startups start anywhere. Companies we talked with came from a wide variety of US cities, including Kansas City, Arlington, Cincinnati, and Detroit. Exhibitors from the Netherlands, Canada, and France, among many other countries, also had a large presence in Eureka Park.

Meeting Other Eureka Park Entrepreneurs

Though a bit overwhelming at first, Eureka Park was our favorite section of CES. In addition to picking up free Arduinos, we appreciated being able to speak directly with the innovators who developed the products at each of the booths. We enjoyed learning more about the ideas behind each product and the use cases each company envisioned for its technology. Meeting other people who are also developing VR and AR technology was a motivating and humbling reminder that our platform, which augments reality for your sense of touch, is at the forefront of new technology development.

Inbound Interest for Using Our SDK

Lastly, having a booth in Eureka Park allowed us to give live demos of our technology and introduce attendees to the industrial and consumer applications of our haptic platform. We were excited to see inbound interest from automotive companies, other Eureka Park startups, and individual software developers who want to use our SDK and hardware to add haptic feedback to their projects. Additionally, watching people's reactions the first time they try our form factors is always a fun experience!

If you were also at CES, we hope you enjoyed the show as much as we did. If not, check out the many articles published online for a rundown of the highlights!

Techstars, as experienced by Employee 1

It's been about six weeks since we returned to Phoenix after participating in Kansas City Techstars, meaning this post is pretty overdue. I've had ample time to reflect on my time in the program and wanted to share three Techstars experiences that remain valuable as I continue working at Somatic Labs and in the start-up community.

Onboarding

Having officially joined the team just a few weeks prior to the start of Techstars, I arrived in Kansas City still learning the workflow of Somatic Labs. I was daunted by the prospect of immediately meeting with dozens of mentors per week. I hadn't practiced the company's elevator pitch, learned the team's frequently used phrases, or familiarized myself with widely referenced investors and VC firms. Yet I didn't have time to feel as apprehensive as I usually would in a new environment because, true to form, the Techstars program hit the accelerator on day two.

The Techstars schedule became my employee training and onboarding process. Attending mentor meetings allowed me to practice and improve my version of the company pitch. Through program workshops, I expanded my start-up vocabulary and learned more about how investors and VC firms operate. Daily debriefs within our team clarified my technical questions about our platform's design and allowed me to share my thoughts on product-market fit. My daily Google searches for relevant funding opportunities and potential partner organizations during Techstars outlined what my role encompasses now: writing applications for non-dilutive funding sources, demoing our technology to potential customers and partner organizations, and performing market validation. As employee 1 at an early-stage Techstars company, I had, and still have, the privilege and opportunity to shape my responsibilities around the intersection of what I find interesting and what grows the business.

Cohort Culture

One of my favorite characteristics of the Kansas City Techstars cohort was the diversity in age and prior expertise among the ten participating companies. Though half of the teams were all male, the average age of our cohort was closer to 30. The company founders had previous experience in corporate operations, construction, education, investment banking, kinesiology, software development, law, and sustainable energy. Because I tend to think first of gender or race as indicators of diversity, I appreciated having the opportunity to learn from cohort friends who were at different stages of life and career. Working alongside PhDs, successful self-made entrepreneurs, and industry experts who didn't snub or discount me for being a recent college grad made the start-up community feel more supportive and inviting than I had originally thought.

The Team

Figuring out how I could weave my personality, skills, and work style into the co-founders' existing workflow was a hurdle I hadn't anticipated. I've known all three co-founders since 2013, when I started at Arizona State University. We met through the Flinn Scholars Program and became friends at different points in undergrad. Because I was already friends with Jake, Ajay, and Shantanu, adding the professional work dimension to each relationship was a bit of a weird adjustment. This transition wasn't due to an unwelcoming environment (Jake, Ajay, and Shantanu are kind, friendly, and humorous), but rather because I was used to interacting with them in social settings, not an office space.

Techstars helped me quickly integrate into the team's workflow and culture because participating in the program was an experience the four of us shared. Even though I was the newest addition to Somatic Labs, we were all new to the accelerator lifestyle. All of us had to adjust to the co-working space, mentor meetings (i.e. mentor madness), and program schedule. The fast-paced days and newly adopted organizational habits allowed me to learn and grow with the guys as we became a four-person team.

Additionally, the move to Kansas City itself was a bonding experience. Having been raised in Arizona for most of our lives, we were unaccustomed to Midwest weather and BBQ, and spent many weekends enjoying the outdoors and sampling KC's many BBQ restaurants. Moreover, having a team to share ordinary life activities with was a key component in becoming truly comfortable identifying myself as part of Somatic Labs. Little things like shopping at the farmer's market with Ajay, getting an afternoon iced coffee with Shantanu, or going to yoga classes with Jake made me feel more connected to the co-founders and better able to balance being both a friend and colleague.

Final Thoughts

The final observation I’d like to share from my Techstars experience is one that might have already been made but is still important to keep in mind. The start-up ecosystem is designed for growth, whether it be in revenue, market size, or skillset. Though it’s a competitive and high-risk environment, the time and effort are worth it as long as you perceive the knowledge and experience you gain as continually trending up and to the right.

Early Preview of Moment SDK

We have an exciting announcement to make: we've just released an early preview of our SDK! If you're a software developer and have been waiting to start developing for Moment, here's your first look. We're excited to see what you'll make!

The SDK is still under development, and it’s likely to change in the coming months.

Introduction

This repository contains the Software Development Kit (SDK) for Moment, the wearable device that communicates entirely through your sense of touch.

This SDK contains the code that is executed on Moment devices inside a custom JavaScript runtime environment. To simplify the process of creating custom embedded software for Moment, we provide several ready-to-use functions for creating event callbacks, transitioning the LED color, and creating rich haptic effects.

Haptic Feedback in Robotic Surgery

The Da Vinci Surgical System is a robot built by Intuitive Surgical. After being approved for use by the FDA in 2000, it has been adopted by surgeons performing a wide range of minimally invasive procedures, including prostatectomies, cardiac valve repair, and gynecologic procedures. As of June 30, 2014, approximately 3,100 Da Vinci robots were installed worldwide, with each unit costing roughly $2 million. The primary innovation of the Da Vinci system is the surgeon's console: an immersive visualization system that takes an ordinary laparoscopic image and projects it to a binocular display, enhancing the dexterity with which a surgeon can perform several procedures. For the patient, the Da Vinci system typically provides a reduced amount of pain and blood loss, frequently resulting in a shorter hospital stay and faster recovery period.

The Da Vinci gives surgeons feedback as they operate, but it relies on sensory substitution, displaying force information through visual cues rather than touch. Ideally, tactile feedback from the device would render the exact applied forces and tissue deflections resulting from the surgical procedure. Even though the haptic information is displayed visually (a form of tactile-visual sensory substitution), it can still augment a surgeon's performance. A 2004 study demonstrated significantly greater and more consistent tensions applied to suture materials, without breakage, during robotic knot tying enhanced with sensory-substituted feedback compared to knots tied without feedback [1]. Further research is warranted to engineer surgical robots that provide direct kinesthetic feedback to the user's hands [2].

Kiki, Bouba, and Visual Perception

In 1929, the German-American scientist Wolfgang Köhler observed what is now known as the Bouba-Kiki effect [1]. In 2001, Vilayanur S. Ramachandran replicated Köhler's experiment with college students in the United States and India, and found a large consensus among participants prompted to provide auditory names for visual objects [2]. The findings of Ramachandran and Köhler demonstrate that sensory information appears to carry a predictable and consistent scaffolding of associations and relationships to other modalities of stimuli. The participants' visual perceptions of the shapes printed on the page were used to make judgments about the auditory sounds that ought to be associated with those shapes. Ramachandran and his colleague Edward Hubbard suggest that the evolution of language may not be entirely arbitrary; instead, the naming of objects in space may reflect a natural association of auditory stimuli with the visual, tactile, olfactory, and overall perception of the object's nature. Sounds (and by extension, all sensory information) may automatically convey some degree of symbolic meaning in relation to experiences from other senses.

Auditory-Tactile Synesthesia

MRI imaging of a patient with a localized lesion in the right ventrolateral nucleus of the thalamus revealed changes to the individual's perception. "Initially, the patient was more likely to detect events on the contralesional side when a simultaneous ipsilesional event was presented within the same, but not different sensory modality." Eventually, this transformed into a form of synesthesia "in which auditory stimuli produce tactile percepts." This study suggests that sensory synesthesia may be acquired after a brain injury [3].

Visual-Tactile Synesthesia

Mirror-touch synesthesia is a condition in which watching another person being touched activates a similar neural circuit to actual touch. In brain-imaging studies of individuals who experience mirror-touch synesthesia, their empathic responses to the experiences of other people appear to be heightened [4]. This form of synesthesia also appears to augment an individual's ability to recognize and interpret the facial expressions of an interaction partner [5]. Although a thorough empirical explanation for the phenomenon has not yet been developed, there are different potential theoretical explanations currently being investigated in more detail. The Threshold Theory explains it "in terms of hyper-activity within a mirror system for touch and/or pain," and the Self-Other Theory explains it "in terms of disturbances in the ability to distinguish the self from others." [6] The two theories carry different implications: the Threshold Theory implies a localized phenomenon impacting the mirror system, while the Self-Other Theory implies a more general difference that may be reflected in other cognitive processes as well.

Enhanced Sensory Perception

Some scholars argue that artistic experimentation may be rooted in sensory synesthesia, by allowing an artist to describe a sensory experience using a wider range of detail [7]. Although scientists have developed methods of testing and profiling synesthetes [8], much of the theoretical framework used to understand cross-modal sensory perception remains speculative. Although Ramachandran mentions a possible relationship between synesthesia and enhanced sensory perception [9], it remains unclear exactly how this enhancement manifests itself in a person's ability to perform different activities or pursue artistic endeavors. In a preliminary study exploring the perceptual processing abilities of synesthetes [10], "there was a relationship between the modality of synaesthetic experience and the modality of sensory enhancement." In other words, a synesthete who experiences color triggered by other sensory modalities will also have enhanced color perception. A synesthete who experiences tactile sensations will have enhanced tactile perception. Further research is required to understand exactly how these enhanced perceptual abilities manifest themselves in common tasks.

Getting Started with Haptic Feedback: An Arduino Guide

Adafruit provides a breakout board for the DRV2605 haptic driver from Texas Instruments. Although the example tutorial included with the product describes a quick way to set up the driver with an eccentric rotating mass (ERM) motor, we prefer using a linear resonant actuator (LRA) for increased precision and enhanced haptic feedback. You can use the breakout board with an Arduino Uno to quickly make a prototype of a system that delivers precise vibrotactile cues.

Creating Haptic Feedback

Step 1: Soldering

Solder the header strip onto the breakout board, then solder the LRA's two leads onto the board as well. After this step, both the headers and the LRA should be firmly attached to your DRV2605 breakout board.

Step 2: Wiring and Hookup

Connect VIN on the DRV2605 to the 5V supply of the Arduino

Connect GND on the DRV2605 to GND on the Arduino

Connect the SCL pin to the I2C clock SCL pin on your Arduino, which is labelled A5

Connect the SDA pin to the I2C data SDA pin on your Arduino, which is labelled A4

Connect the IN pin to an I/O pin, such as A3
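
Before installing any libraries, you can sanity-check this wiring with a short sketch that pings the DRV2605 over I2C. This is a quick test of our own rather than part of the Adafruit tutorial; it assumes only that the driver responds at its fixed I2C address, 0x5A.

```cpp
#include <Wire.h>

const uint8_t DRV2605_ADDR = 0x5A; // the DRV2605's fixed I2C address

void setup() {
  Serial.begin(9600);
  Wire.begin();
  Wire.beginTransmission(DRV2605_ADDR);
  if (Wire.endTransmission() == 0) {
    // An ACK (return code 0) means the driver is powered and on the bus.
    Serial.println("DRV2605 found - wiring looks good.");
  } else {
    Serial.println("DRV2605 not responding - check SDA, SCL, and power.");
  }
}

void loop() {}
```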

Step 3: Testing and Creating Effects

Adafruit provides a very useful Arduino library for the DRV2605 that you can use to get started. In particular, we recommend looking through the example code to get an idea of the effects you can produce. On pages 57 and 58 of the DRV2605 datasheet, you can find a table of all the effects you can produce "out of the box."
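
As a concrete starting point, here is a minimal sketch in the spirit of the library's examples. It assumes Adafruit's Adafruit_DRV2605 library is installed and uses the wiring above; effect number 47 ("Buzz 1") comes from the datasheet's effect table, and any of the other effect numbers can be swapped in.

```cpp
#include <Wire.h>
#include "Adafruit_DRV2605.h"

Adafruit_DRV2605 drv;

void setup() {
  drv.begin();                        // initialize the driver over I2C
  drv.useLRA();                       // drive an LRA rather than the default ERM
  drv.selectLibrary(6);               // effect library 6 is tuned for LRAs
  drv.setMode(DRV2605_MODE_INTTRIG);  // play effects when go() is called
}

void loop() {
  drv.setWaveform(0, 47); // queue effect #47 ("Buzz 1") in the first slot
  drv.setWaveform(1, 0);  // a zero in the next slot ends the sequence
  drv.go();               // trigger playback
  delay(1000);            // pause before buzzing again
}
```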

Step 4: Creating Your Own Waveforms

Since you can also set the intensity of the LRA in realtime, you can design your own waveforms and effects by changing the value over time. Adafruit also provides an example for setting the value in realtime on GitHub. You can combine this example code with a waveform design tool like Macaron to customize the feedback provided by your new Arduino-powered haptic device!
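
For instance, here is one way you might sketch a custom "swell" effect in realtime mode. This is an illustrative example of ours, assuming the same Adafruit library and wiring as above; the ramp step and delays are arbitrary values you can tune to taste.

```cpp
#include <Wire.h>
#include "Adafruit_DRV2605.h"

Adafruit_DRV2605 drv;

void setup() {
  drv.begin();
  drv.useLRA();
  drv.setMode(DRV2605_MODE_REALTIME); // amplitude is now set directly from code
}

void loop() {
  // Ramp the drive level up and back down to create a smooth "swell"
  // instead of a flat buzz. 0 is off; 127 is full-scale drive.
  for (int level = 0; level <= 127; level += 4) {
    drv.setRealtimeValue(level);
    delay(10);
  }
  for (int level = 127; level >= 0; level -= 4) {
    drv.setRealtimeValue(level);
    delay(10);
  }
  drv.setRealtimeValue(0); // make sure the motor is off between swells
  delay(500);
}
```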

Moment is Made in the USA

We're proud to be based in Phoenix, Arizona. Our main office is located at the Center for Entrepreneurial Innovation, and we can't do our part for the American economy by shipping jobs overseas. That's why Moment is made in the USA.

We work with local companies whenever we can. For manufacturing and assembly, we work with Quiktek Assembly in Tempe, Arizona. For component sourcing, we work with Avnet, a leading electronics distributor headquartered in Phoenix. Many of our primary partners are within a quick 15-minute drive from our office, and we're also working to source all of our plastics and miscellaneous parts from local distributors.

Beyond keeping Americans employed, we can guarantee a few things almost every big brand (including the ones named after fruit) cannot:

we pay fair wages

we never employ underage workers

our facilities are powered by cleaner sources of energy

we recycle whenever possible

we meet all EPA regulations

We produce and assemble our products in the United States, and we’re always looking for opportunities to bring jobs back here to the USA. It’s the only way we can ensure we deliver an honest, high-quality product that isn’t subsidized by environmental catastrophe and unfair practices.

Step 2. Development

Our product development cycle touches several different programming languages:

Go, powers the backend of our website and our firmware deployment server

C, powers our embedded software and firmware stack

Python, powers our dev ops and scripting needs

Swift, powers our iOS app

Java, powers our Android app

HTML/CSS, powers our website and keeps it beautiful (with some Pure.css to keep things in line)

JavaScript, powers the interactive elements of our website

We take pride in putting together our technology stack from the ground up: firmware, web server, and application code.

Step 3. Production

We work with Phoenix Analysis and Design Technologies to create small batches of mechanical parts on industrial 3D printers. We use an injection molding process from Proto Labs to test our mechanical design more thoroughly. Our final product assembly and production are handled by Quiktek Assembly, Inc.

How We Filmed our Crowdfunding Video for Under $2,000

We're a small startup, and we don't have a large marketing budget. When we started working on our crowdfunding campaign, we took a look at a few of the most successful crowdfunded products.

We asked ourselves: what do they all have in common? They all had a video with excellent production value – a video that could cost anywhere from $25,000 to $100,000 or more depending on whether or not the actors were paid.

As a startup that's bootstrapped and hasn't raised a large round of investment, we needed to get creative. We used $2,000 of our savings to film a video that could have easily cost 10x as much. We recruited a bunch of our talented friends who are musicians, dancers, researchers, and body builders. Then, we filmed and edited footage until we reached our final cut.

Supplies

Camera – $250 rental

Jake already had a Nikon D7000 camera for photography, which shoots 1080p HD video, so we filmed our entire video on his DSLR. If you don't already have a DSLR that shoots HD video, you can rent one online at a huge discount compared to buying.

Camera Lens – $560 rental

Camera lenses can cost a lot of money – upwards of $2,000. This would have eaten away most of our budget. Thankfully, we were able to find a good online rental service (LensRentals.com) that allowed us to rent two extremely high-quality lenses for under $600. We rented a Nikon 14-24mm f/2.8 and a Nikon 24-70mm f/2.8, and used the lenses for a month to film the different shots in our video. The quality is surprisingly good.

Camera Track – $110

The most effective video shots are dynamic: they feature a moving camera that draws attention to the most impactful subjects in a video. Rather than rely only on simple pans, we wanted to create a more dynamic aesthetic by using a track; many of our favorite shots wouldn't have been possible without one.

Handheld Stabilizer – $30

For shots taken off of the track or tripod, camera stability is essential. While calibrating it can be a bit of a hassle, to say the least, an inexpensive camera stabilizer will save you a lot of headache in post-processing.

Tripods (2x) – $60

Jake already had three tripods we used for filming and mounting the camera track. This allowed us to capture stable shots without needing too much extra equipment, but a typical tripod will run about $30.

Microphone – $129

Good video needs good audio. Ajay and Shantanu both already had Blue Yeti microphones, which we used to record narrations and spoken parts. We also got pop filters and a wind filter for the microphones to make sure the sound quality was perfect. We made heavy use of large blankets to reduce echoes and dampen the noise of the room. Although this doesn't give the same audio quality you get from a vocal booth and a professional-grade setup, it provides a passable approximation.

Lightbox – $40

Proper lighting is essential for any type of photography, and product photography is no exception. Thankfully, there are reasonably priced solutions for lighting small- to medium-sized products. We picked up a lightbox for $40 on Amazon, which let us take awesome photos of Moment.

Adobe Creative Cloud – $60

Now that Adobe offers a monthly Creative Cloud subscription instead of a large one-time software purchase, we were able to afford the Creative Cloud installations for After Effects and Premiere Pro without a problem.

Halcyon Days by Mokhov – $600

We licensed the track "Halcyon Days" by Mokhov as the soundtrack for our video.

Videohive Messaging Effects – $28

We wanted to add very subtle effects to enhance the video, so we decided to use the Videohive Text Messages package, which provided us with an easy way to add subtle animations to our video in Adobe After Effects.

Grand Total: $1,897

Locations

Freeway overpass above the I-10 in Phoenix.

Biking around Downtown Phoenix with a tripod and track.

Filming the manufacture of our circuit boards at QuikTek Assembly.

Planning and editing footage at the Center for Entrepreneurial Innovation.

Filming Anthony Kelly (professional dancer) at a downtown shipping container.

Filming Anthony Brant (musician and audio engineering specialist) at Epicentre Records.

Video editing at a coffee shop, and squeezing in a few extra B-roll shots.

Filming at Hole in the Rock.

Filming a body builder at the Arizona State University gym.

Adjusting the exposure and color balancing clips.

Rendering our video on a Mac Pro.

Messy office after rearranging for filming.

Dinner at Ike's Love and Sandwiches after finishing filming.

Pre-Orders Start Today

The wait is over. We've finished the design, iterated on the hardware, and written thousands of lines of code. Now, we're ready to start collecting pre-orders for Moment, the first device that communicates entirely through your sense of touch.
For the first 24 hours, backers will receive a special early bird price of $99 — you won’t be able to get this price anywhere else, ever again.

Spread the word.

Help us bring Moment to as many people as possible. Share Moment with your friends on Facebook, Twitter, Instagram, or elsewhere!