Seattle-based startup Magic AI, an angel-funded team of five, has been developing AI for stable managers and riders to monitor the health and security of horses from video feeds.

Image recognition has been a boon to agriculture businesses, including those in the cattle industry. Magic AI corrals algorithms for its AI-powered software to monitor video and help better manage horses, streaming the video to its servers for processing.

Magic AI founder and CEO Alexa Anthony knows the needs of horse owners. The daughter of a horse trainer, she grew up riding in the Seattle area and is a former NCAA national champion in horse jumping.

“If you have a Lamborghini, you have it in a garage with an alarm. Horses are oftentimes in a barn in remote places without any security that they are OK when you are sleeping,” she said.

Magic AI’s StableGuard, a system of cameras that works with a mobile app to keep tabs on horses, provides GPU-driven video monitoring and emergency alerts. StableGuard can be configured to recognize riders and staff of stables as well as to send alerts if strangers enter.

Horse Data Hurdle

Building StableGuard wasn’t easy. Magic AI’s developers initially couldn’t find enough publicly available horse images to adequately train their deep neural networks. They began with MXNet and horse images from the classic ImageNet dataset, which proved problematic.

“They actually trained abysmally because the angle of our cameras is overhead, very different than ImageNet,” said Kyle Lampe, vice president of engineering at Magic AI. “That really threw off most of the things we used to train.”

Magic AI’s developers relied heavily on transfer learning to build on a number of different image classification networks. Lampe said that with enough new data — “terabytes and terabytes” of video images — they were able to successfully build on top of networks that had already been trained and used in competitions.

“When you do transfer learning, you’re putting images in after the fact and it is applying everything that it has learned before,” he said.
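For readers unfamiliar with the technique, here is a minimal transfer-learning sketch. Magic AI built on MXNet; this example uses PyTorch purely for illustration, and the behavior classes are hypothetical.

```python
# Minimal transfer-learning sketch (PyTorch for illustration; Magic AI's
# own pipeline used MXNet). A network pretrained on ImageNet keeps its
# feature-extraction layers, and only a new final layer is trained on
# the overhead stable footage.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)  # weights learned on ImageNet

# Freeze the pretrained backbone so it retains what it already learned.
for param in model.parameters():
    param.requires_grad = False

# Swap in a classification head sized for the new task. The class count
# and labels here are hypothetical stand-ins for horse behaviors.
num_classes = 3  # e.g., "standing", "lying_down", "rolling"
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters get updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```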

Developers at Magic AI relied on GPUs on desktop as well as on AWS to handle the hefty training workloads on the deep neural networks.

Horse Health Results

The original inspiration for Magic AI came when Anthony’s horse died of colic. Colic symptoms are fairly easy to spot — rolling on the ground, kicking at the stomach, pawing at the ground — and can be identified with image classification algorithms.
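As a rough sketch of that idea, a frame classifier can flag colic-like behaviors and raise an alert. The labels, threshold and alert hook below are hypothetical; this shows the general pattern, not Magic AI’s system.

```python
# Hypothetical colic-sign flagging from classifier output; not Magic AI's code.
import torch
import torch.nn as nn
from torchvision import models

LABELS = ["standing", "eating", "rolling", "kicking_at_stomach", "pawing"]
COLIC_SIGNS = {"rolling", "kicking_at_stomach", "pawing"}

model = models.resnet18(pretrained=False)  # untrained stand-in model
model.fc = nn.Linear(model.fc.in_features, len(LABELS))
model.eval()

def check_frame(frame, threshold=0.8):
    """frame: a (1, 3, 224, 224) normalized tensor from an overhead camera."""
    with torch.no_grad():
        probs = model(frame).softmax(dim=1)[0]
    label = LABELS[probs.argmax().item()]
    if label in COLIC_SIGNS and probs.max().item() >= threshold:
        print(f"ALERT: possible colic sign ({label})")  # stand-in for a push alert
    return label

check_frame(torch.randn(1, 3, 224, 224))
```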

Today, Magic AI is adding to a growing list of health indicators for customers to track using its StableGuard system. StableGuard lets customers keep track of how often horses are eating and drinking, whether they are on their feet, and whether they are blanketed when it’s cold, offering more ways to support horse wellness.

The company can also alert horse owners to signs that an animal is close to giving birth. “We can see signs that are indicative of birth. And then you can look at the live feed on your phone,” said Anthony.

Magic AI has a pilot customer, Thunderbird Show Park, in British Columbia, Canada, where StableGuard is installed in 120 horse stalls. The venue offers the horse-monitoring service for $15 a day to those there for horse-jumping tournaments and other events.

Each installation is powered by an on-site GPU, sizable hard drive storage and other computing resources to run Magic AI’s service. “I am excited to see how this technology can improve the wellness of animals globally,” said Anthony.

The Toronto-based startup RockMass is developing an NVIDIA AI-powered mapping platform that can help engineers assess tunnel stability in mines and on construction sites.

Today, geologists and engineers visually assess the risks of rock formations by standing five meters away from the rock as a safety precaution. That isn’t ideal for ensuring accurate results, said Shelby Yee, CEO and co-founder of RockMass.

“What they are doing right now takes about 90 minutes, and our technology can do it in about five minutes,” said Yee.

RockMass is having engineers in the field test out its handheld unit, the Mapper. It’s aimed at those in mining, geological exploration and civil engineering. The startup is developing the AI platform for robots, drones and handheld devices used to capture geological data.

The startup’s Mapper AI device offers a safer way to keep engineers farther from a possible tunnel collapse, as well as a faster system for gathering and processing data. Robots and drones using its platform could go into even more hazardous areas.

RockMass customers include Brazilian mining company Nexa Resources, which seeks increased automation and safety with use of the startup’s technology.

AI for Geotech

Engineers have for years surveyed the angles of rock surfaces using conventional equipment such as a theodolite, a scope-like device mounted on a tripod to take optical measurements. They seek out so-called planes of weakness, which mark potential failure points within tunnels and rock formations.

The engineers measure the surfaces of rock formations to collect data for building what are known as stereonets. Stereonets map three-dimensional forms, such as a boulder, for viewing in a two-dimensional display.

Engineers traditionally take the data from a site back to the office to transfer onto a computer to create a stereonet.
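For a concrete picture of what that plot looks like, the open-source mplstereonet package can render a stereonet from strike-and-dip measurements. The values below are made up, and the library is just an illustration, not part of RockMass’s stack.

```python
# Plot hypothetical strike/dip measurements on an equal-area stereonet
# using the open-source mplstereonet package (illustrative only).
import matplotlib.pyplot as plt
import mplstereonet

strikes = [120, 125, 118, 210]  # degrees, right-hand-rule convention
dips = [45, 50, 47, 30]         # degrees below horizontal

fig, ax = mplstereonet.subplots()
ax.plane(strikes, dips, color="k", linewidth=1)  # planes as great circles
ax.pole(strikes, dips, color="r", markersize=5)  # poles to those planes
ax.grid()
plt.show()
```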

The startup’s technology promises an easier way. Its handheld device is packed with sensors for such measurements. Its lidar sensor and inertial measurement unit map the orientation of planes of weakness in rock formations. And it can do this in underground environments lacking GPS, wireless communication and light.

RockMass’s software relies on the information provided by these sensors to identify usable data for engineers within minutes. The company is working to capture and process the data on the spot for field engineers. “You are able to see the data in real time,” Yee said.
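The underlying math is straightforward to sketch: fit a plane to a patch of lidar points and read off its dip and dip direction. This is a generic least-squares approach, not RockMass’s algorithm.

```python
# Generic plane-orientation estimate from a lidar point patch (NumPy sketch;
# not RockMass's algorithm). Assumes points arrive in an east-north-up frame,
# the kind of local frame an IMU can establish without GPS.
import numpy as np

def plane_orientation(points):
    """Return (dip, dip_direction) in degrees for an (N, 3) point patch."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:  # use the upward-pointing normal
        normal = -normal
    nx, ny, nz = normal
    dip = np.degrees(np.arccos(nz))                       # tilt from horizontal
    dip_direction = np.degrees(np.arctan2(nx, ny)) % 360  # azimuth of steepest descent
    return dip, dip_direction

# Synthetic check: a surface dipping 30 degrees toward the east (azimuth 90).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = -np.tan(np.radians(30)) * xy[:, 0] + rng.normal(0, 0.01, 200)
print(plane_orientation(np.column_stack([xy, z])))  # ~ (30.0, 90.0)
```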

‘Computationally Demanding’ AI

RockMass’s platform for onsite data collection is computationally demanding, said CTO and co-founder Stuart Bourne. The company’s devices sport robotics capabilities from NVIDIA Jetson and rely on its support for CUDA, cuDNN and TensorRT software libraries.

“Jetson has very high computational power relative to how much energy it draws,” Bourne said.

The startup enlists CUDA libraries for that real-time processing, pairing its devices with cloud instances running NVIDIA GPUs to generate stereonets for customers.

“Nobody is able to collect and process the data in the way that we do,” Yee said. “We are able to process in the cloud in real time simply because of the power of the GPUs.”
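The article doesn’t name the specific CUDA libraries involved, but as a hypothetical illustration, the plane-fitting linear algebra sketched above moves to a GPU almost verbatim with CuPy, a CUDA-backed NumPy work-alike:

```python
# Hypothetical GPU version of the plane fit using CuPy (illustrative only;
# not RockMass's code).
import numpy as np
import cupy as cp

points = cp.asarray(np.random.rand(100_000, 3).astype(np.float32))  # to GPU memory
centered = points - points.mean(axis=0)
_, _, vt = cp.linalg.svd(centered, full_matrices=False)  # SVD runs on the GPU
normal = cp.asnumpy(vt[-1])  # copy only the tiny result back to the host
```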

RockMass plans to further develop its drones and robots to launch pilots next year.

What’s not to like about food delivered by a cheery little robot in about 30 minutes?

That’s the attraction of Kiwi Bot, a robot hatched by a Colombian team now in residence at the University of California, Berkeley’s Skydeck accelerator, which funds and mentors startups.

The startup has rolled out 50 of its robots — basically, computerized beer coolers with four wheels and one cute digital face — and delivered more than 12,000 meals. They’re often seen shuttling food on Cal’s and Stanford University’s campuses.

Kiwi Bot has been something of a sidewalk sensation and won the hearts of students early on with promotions such as its free Soylent and Red Bull deliveries (check out the bot’s variety of eye expressions).

Kiwi Bot customers use the KiwiCampus app to select a restaurant and menu items. Food options range from fare at big chains such as Chipotle, McDonald’s, Subway and Jamba Juice to generous helpings of favorites from local restaurants. Kiwi texts customers on their order status and expected time of arrival. Customers receive the food with the app, and a hand gesture in front of Kiwi opens its hatch. Deliveries are available between 11 am and 8 pm.

Kiwi is partnered with restaurant food delivery startup Snackpass, delivering to its customers for the same $3.80 fee. For now, the delivery robots are only available around the UC Berkeley and Stanford campuses.

Reinventing Food Delivery Model

Kiwi is aimed at a unique human-and-robotics delivery opportunity. In Colombia, like in other parts of the world, it’s normal to get fast and cheap deliveries by bicycle service, said Kiwi co-founder and CEO Felipe Chavez. “Here it’s by car, and the delivery fees are like $8. That gave me the curiosity to explore the unit economics of the delivery.”

Chavez and his team moved their startup — originally for food delivery by people — from Bogota to Berkeley in 2017 and applied to the Skydeck program.

The Kiwi team has ambitious plans. The company is working to develop a smooth connection between people, robots and restaurants, addressing the problem with three different bots. Its Kiwi Restaurant Bot is waist high — think R2-D2 in Star Wars — and has an opening at the top for restaurant employees to drop in orders. It then wheels out to the curb for loading.

At the sidewalk, a person unloads meals into a Kiwi Trike, an autonomous and rideable electric pedicab that stores up to four Kiwi Bots loaded with the meals for deliveries. The Kiwi Trike operator can then distribute the Kiwi Bots to make deliveries on sidewalks.

Packing Grub and Tech

Kiwi Bots are tech-laden little food delivery robots. They sport a friendly smile and digital eyes that can wink at people. Kiwi Bots have six Ultra HD cameras capable of 250 degrees of vision for object detection, packing NVIDIA Jetson TX2 AI processors to help interpret all the images of street and sidewalk action for navigation.

Jetson enables Kiwi to run its neural networks and work with its optical systems. “We have a neural network to make sure the robot is centered on the sidewalk and for obstacle avoidance. We can also use it for traffic lights. The GPU has allowed us to experiment,” Chavez said.

The Kiwi team used simulation to train its delivery robots. They also enabled object detection using MobileNets and autonomous driving with the DriveNet architecture. The company relied on a convolutional neural network to classify events such as street crossings, wall crashes, falls, sidewalk driving and other common situations to improve navigation.
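As a sketch of what MobileNet-based detection looks like in practice, torchvision ships an off-the-shelf SSDLite-MobileNetV3 detector. This is illustrative; Kiwi’s actual networks aren’t public here.

```python
# Off-the-shelf MobileNet-based object detection (illustrative sketch;
# not Kiwi's production stack).
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

detector = ssdlite320_mobilenet_v3_large(pretrained=True)
detector.eval()

frame = torch.rand(3, 320, 320)    # stand-in for one normalized camera frame
with torch.no_grad():
    result = detector([frame])[0]  # dict with boxes, labels, scores

keep = result["scores"] > 0.5      # drop low-confidence detections
print(result["boxes"][keep], result["labels"][keep])
```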

The Kiwi platform is designed for humans and robots working together. It’s intended to make it so that people can service more orders and do it in a more efficient way.

“It’s humans plus robots making it better,” Chavez said. “We are going to start operating in other cities of the Bay Area next.”

Learn more about artificial intelligence for robotics using NVIDIA Jetson.

Isthmus. Nuclear. Anemone. Tricky English pronunciations are a challenge for many immigrants to the U.S. and native-born speakers alike. ELSA — which stands for English language speech assistant — aims to help with that.

The three-year-old Silicon Valley startup offers an English-pronunciation app, dubbed ELSA Speak, that’s geared toward American English and available for Android and iOS devices.

Vu Van, ELSA’s co-founder and CEO, said English pronunciation plagues many people pursuing careers. A Vietnamese immigrant, Van picked up English early and went on to earn an MBA from Stanford University in 2011, but she still struggled with certain words, leading her to start ELSA.

ELSA’s app is designed to be a personalized coach for practicing English, particularly for non-native speakers. It offers bite-sized lessons intended to improve pronunciation with 10 minutes a day of practice.

While most language apps emphasize grammar, “we are very focused on pinpointing your pronunciation errors,” Van said. “It’s supposed to help people with accents.”

She said that having an accent can crush one’s confidence and that working on pronunciation is difficult without the aid of an expensive tutor.

That’s where ELSA comes in. The app uses AI and speech-recognition technology to help people practice English for professional and everyday situations.

Practice Makes Perfect

The coaching app, which enables people to set daily practice reminders, has a slick interface that makes learning easy and fun. The app coach shows a sentence and prompts you to tap the microphone icon and say it. It gives positive feedback in bold, writing EXCELLENT in big green lettering for good pronunciations.

Perhaps even more valuable, it counters mispronunciations with helpful tips to get it right. For example, the language coach offers a number of pointers that help users understand where to place their tongue in their mouth and how to hold their lips when saying particular words.

ELSA is geared to help non-native speakers focus on sentences commonly used in a new job or at a conference, among other professional settings.

The app first takes people through a five-minute assessment test to identify challenges. It offers more than 600 two-minute English lessons and more than 3,000 words for people to practice.

Users’ conversational English lessons are recorded and scored in the app to help gauge their pronunciation level on specific words.

ELSA’s Coaching Evolution

To train its pronunciation model, the company fed thousands of hours of spoken English into a recurrent neural network. It’s now fine-tuning its algorithm and is constantly training with data from users of its app, said Van.
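The article doesn’t detail the architecture, but a toy version of such a recurrent model might look like the following: an LSTM over per-frame audio features producing phoneme scores that feedback could be built on. All sizes and counts here are hypothetical.

```python
# Toy recurrent pronunciation scorer (hypothetical shapes; not ELSA's model).
import torch
import torch.nn as nn

NUM_PHONEMES = 40  # rough size of an American English phoneme inventory
FEATURE_DIM = 13   # e.g., MFCC features per 10 ms audio frame

class PronunciationScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(FEATURE_DIM, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, NUM_PHONEMES)

    def forward(self, frames):    # frames: (batch, time, FEATURE_DIM)
        hidden, _ = self.rnn(frames)
        return self.head(hidden)  # per-frame phoneme logits

model = PronunciationScorer()
logits = model(torch.randn(1, 200, FEATURE_DIM))  # ~2 seconds of audio frames
probs = logits.softmax(dim=-1)  # per-phoneme confidences, usable for feedback
```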

“The more NVIDIA GPUs we have, the more experiments we can run on the model,” she said.

Launched in 2016, ELSA’s apps have been downloaded more than 2 million times. The service is free for the first week, and then requires a subscription to continue beyond limited access. Subscriptions cost $3.99 for a month, $8.99 for three months or $29.99 for a year.

ELSA is a member of the NVIDIA Inception program, a virtual accelerator that offers hardware grants, marketing support and training with deep learning experts.

The startup recently scooped up $3.2 million in venture funding. The founders are seeking additional AI talent to help further build out the service.

The first thing to know about the new Nintendo Switch home gaming system: it’s really fun to play. With great graphics, loads of game titles and incredible performance, the Nintendo Switch will provide people with many hours of engaging and interactive gaming entertainment.

But creating a device so fun required some serious engineering. The development encompassed 500 man-years of effort across every facet of creating a new gaming platform: algorithms, computer architecture, system design, system software, APIs, game engines and peripherals. They all had to be rethought and redesigned for Nintendo to deliver the best experience for gamers, whether they’re in the living room or on the move.

A Console Architecture for the Living Room and Beyond

Nintendo Switch is powered by the performance of the custom Tegra processor. The high-efficiency scalable processor includes an NVIDIA GPU based on the same architecture as the world’s top-performing GeForce gaming graphics cards.

The Nintendo Switch’s gaming experience is also supported by fully custom software, including a revamped physics engine, new libraries and advanced game tools. NVIDIA additionally created new gaming APIs to fully harness this performance. The newest API, NVN, was built specifically to bring lightweight, fast gaming to the masses.

Gameplay is further enhanced by hardware-accelerated video playback and custom software for audio effects and rendering.

We’ve optimized the full suite of hardware and software for gaming and mobile use cases. This includes custom operating system integration with the GPU to increase both performance and efficiency.

NVIDIA gaming technology is integrated into all aspects of the new Nintendo Switch home gaming system, which promises to deliver a great experience to gamers.

The region’s entrepreneurial energy is on vivid display within the hall, which is partially wrapped with NVIDIA branding. Fledgling company DuckyChannel touts its range of keyboards tied to the Chinese zodiac. Groovy Technology Corp. draws the curious with low-cost digital signage. In a carpeted corner, Be Quiet! displays its sound-dampened desktops.

And major hometown Taiwan names from Acer to Zotac are demoing NVIDIA technology, more than a dozen companies in all, including ASUS, Clevo, Colorful, EVGA, Galaxy, Gigabyte, Innovision, Inwin, Leadtek, MSI, Supermicro and Thermaltake, as well as Microsoft.

MSI was one of the major hometown names demoing NVIDIA technology at Computex.

One of the biggest hits is NVIDIA’s Pascal-based GPU architecture, which takes gaming and VR to a new level, with the GeForce GTX 1080. Unveiled three weeks ago to broad acclaim and just now shipping, the GeForce GTX 1080 performs 2x faster than Titan X, with 3x its power efficiency.

It’s often getting paired up here with the Oculus Rift and HTC Vive headsets. And it’s driving experiences like EVE Valkyrie, featuring post-apocalyptic dogfighting vaulted into space; The Unspoken, where players conjure fireballs in their right hand while brawling on a Chicago construction site; and Edge of Nowhere, based on tracking a lost expedition in vast stretches of Antarctica.

G-SYNC monitors lit up the exhibition hall.

Also lighting up the exhibition hall are new G-SYNC monitors, which deploy variable refresh rates to do away with tearing and stuttering. Operating at 180Hz, the eSports monitors by Acer and ASUS set a new standard for the fastest gaming experience.

NVIDIA’s reach beyond the consumer markets is clear, with our technologies featured in automobiles and the enterprise space.

Audi’s A4 sedan features a virtual dashboard powered by the NVIDIA Tegra system on a chip.

German automaker Audi is debuting to Taiwan’s wealthy consumers its new A4 sedan, featuring a virtual dashboard powered by the NVIDIA Tegra system on a chip. It’s also giving consumers a peek at its new virtual showroom experience. It uses VR to enable consumers to tour any Audi vehicle – from its ferocious R8 coupe to its capacious Q7 SUV – as if it’s in the room, and to see it in varying styles and colors.

MIT’s Media Lab is showing off the fruits of its collaboration with Taiwan’s Institute for Information Industry on a new type of vehicle for urban commuting. Called the Persuasive Electric Vehicle, or PEV, it looks like an electric tricycle with a protective roof. Based on NVIDIA technology, it’s expected to go on trial in Taiwan next year.

Supermicro showed off a system based on NVIDIA’s Tesla M10 GPU accelerators.

And in a sign of how Computex and Taiwan as a whole have evolved far beyond consumer electronics, Gigabyte and Supermicro are displaying systems based on the Tesla M10 GPU accelerators, which drive virtualized applications across multiple desktops.

Progress on the armv7 platform continues, and Jonathan Gray writes in to the arm@ mailing list with some promising news:

There is now a bootloader for armv7 thanks to kettenis@
Recent armv7 snapshots will configure disks to use efiboot and install
device tree dtb files on a fat partition at the start of the disk.

u-boot kernel images are no longer part of the release but can still
be built for the time being. We are going to start assuming the
kernel has been loaded with a dtb file to describe the hardware sometime
soon. Those doing new installs can ignore the details but here they
are.

Marc Gyongyosi isn’t your average college student. The junior computer science major at Northwestern University’s McCormick School of Engineering has thrown himself into the world of lightweight robotics in a way that reaches far beyond the classroom.

Not only has Gyongyosi spent the past two years working with BMW’s robotics research department on developing robotic systems to help factory workers, he’s also involved in two startups. One of those, MDAR Technologies, is working on 3D vision systems for autonomous vehicles.

But it’s his work with the second company, IFM Technologies, which he founded, that landed him on a stage at our annual GPU Technology Conference.

IFM has been working on an autonomous drone that can be reliably operated indoors. Most drones today only fly outdoors because a) they’re too large and clunky to be safely flown indoors, and b) the GPS systems they rely on don’t work indoors. Further complicating the market for outdoor drones is the fact that the FAA must approve them for flight. That’s not the case with indoor drones.

Gyongyosi looked at that convergence of facts and determined that there’s a huge potential market for a commercially available indoor drone. He told GTC attendees that he estimates there are multi-billion-dollar opportunities in areas such as warehouse analytics, utility analysis, insurance inspections, and commercial real estate and construction.

And make no mistake, he’s not in this just to identify those opportunities; he wants to seize them. “We don’t want to just be a research project,” Gyongyosi said during his talk. “We want to be something that goes from problem to solution.”

His solution, however, has presented technical challenges. To start with, he’s had to find an alternative to the GPS built into outdoor drones. He said others have tried motion capture or radio beacons as GPS substitutes, but because he’s trying to keep IFM’s drone small and light, he didn’t want the extra weight. Those options also tend to be expensive and need constant calibration.

Similarly, other drones rely on onboard sensors to detect physical objects around them to avoid collision. But that also has presented a major space challenge on IFM’s small drone, as the amount of data that has to be processed is enormous.

“The processing power you need onboard is large,” he said. “That’s why these platforms are very large.”

To combat these issues, Gyongyosi did two things: First, he opted to mount a single camera on the IFM drone, sacrificing stereoscopic vision but preserving space and keeping the weight down. Then he chose to incorporate feature tracking, which serves much the same role as dedicated sensors but works from the camera’s data instead.
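Camera-based feature tracking of this sort is commonly built from corner detection plus optical flow. The sketch below uses OpenCV’s standard building blocks; it’s the generic technique, not IFM’s implementation.

```python
# Generic camera-based feature tracking with OpenCV (Shi-Tomasi corners +
# pyramidal Lucas-Kanade optical flow). Illustrative; not IFM's code.
import cv2

cap = cv2.VideoCapture(0)  # hypothetical onboard camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track each corner from the previous frame into the current one.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    if len(pts) < 50:  # re-detect when too many tracks are lost
        pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 8)
```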

When the performance of that configuration came up short of his expectations, he turned to the GPU, specifically NVIDIA’s Jetson TK1, which is now part of the vehicle’s physical design.

The results speak for themselves. GPUs are processing the data nearly four times as fast as a CPU. Plus, the feature-tracking rate nearly doubled, from 5.5 Hz to 9.8 Hz. And if that’s not enough, it also improved accuracy and created enough spare space that Gyongyosi was able to add a second camera, which is mounted at a 45-degree angle to the first, trading stereoscopic sight for a larger field of vision.

To further illustrate the potential impact of IFM’s design, Gyongyosi pointed to the colossal failure that is Berlin’s long-planned futuristic airport, a project that was supposed to open years ago but remains non-operational after design flaws were found in the fire detection system during inspection.

Gyongyosi believes indoor drones could have prevented the fiasco by detecting the issue long before inspection, and he hopes IFM’s drones will be performing such tasks soon.

Deploying Jetson TX1 into a final product requires developers to design and manufacture custom carrier boards. But not everyone has the expertise or resources to do so. That changes with the Astro Carrier for Jetson TX1, which Connect Tech Inc. (CTI) introduced today at Embedded World, in Nuremberg, Germany.

Measuring only 57 x 87 mm — about the size of a playing card — Astro Carrier operates in temperatures ranging from -40°C to +85°C. This lets it pack Jetson TX1’s supercomputing punch across a wide range of environmental conditions.

CTI Astro Carrier for Jetson TX1

Astro Carrier connects with off-the-shelf or custom breakout boards, and offers a full array of features, including:

2 Gigabit Ethernet ports (1 from Jetson TX1 and another from an on-board controller)

1 USB 3.0 and 1 USB 2.0 port

1 HDMI port

Up to 3 camera serial interface channels

Mini PCIe expansion support

CTI also announced a lower cost carrier board, aptly named Elroy. It offers a lighter feature set while retaining the durability the company is known for. Elroy is slated to be available in April.

Designed for Developers

When we launched Jetson TX1 in November, we also released a complete suite of developer documentation that lets third parties create their own carrier boards for it. The CTI Astro Carrier marks the first entry into the Jetson TX1 ecosystem, making it easier than ever for entrepreneurs, researchers and technologists to deploy our module.

If you’re at Embedded World this week, stop by the Connect Tech booth (Hall 2, Stand 2-318) or the NVIDIA booth (Hall 4A, Stand 4A-646) to see the CTI Astro Carrier for Jetson TX1 in person.

Pricing and availability of Astro Carrier for Jetson TX1 will be announced shortly, with delivery expected in the coming weeks.

If you’re a GeForce gamer, you already have what you need to take advantage of what the Vulkan API can do. If you’re a developer, you now have a new tool that gives you more control, and greater performance, on a broad range of devices.

Our support for Vulkan on the day it launches, not just on multiple platforms but in cutting-edge games such as The Talos Principle, has some of the industry’s most respected observers taking notice.

“To be able to play a game like The Talos Principle on the same day an API launches is an unheard-of achievement,” said Jon Peddie, president of Jon Peddie Research. “NVIDIA’s multi-platform compatibility and fully conformant driver support across many operating systems is a testament to the company’s leadership role in Vulkan’s development.”

GeForce gamers will be the first to play the Vulkan version of The Talos Principle, a puzzle game from Croteam that shipped today.

What Is Vulkan?

Vulkan is a low-level API that gives direct access to the GPU to developers who want the ultimate in control. With a simpler, thinner driver, Vulkan has less latency and overhead than traditional OpenGL or Direct3D. Vulkan also has efficient multi-threading capabilities so that multi-core CPUs can keep the graphics pipeline loaded, enabling a new level of performance on existing hardware.

Vulkan is the first new-generation, low-level API that is cross-platform. This allows developers to create applications for a variety of PC, mobile and embedded devices using diverse operating systems. Like OpenGL, Vulkan is an open, royalty-free standard available for any platform to adopt. For developers who prefer to remain on OpenGL or OpenGL ES, NVIDIA will continue to drive innovations on those traditional APIs too.

Who’s Behind Vulkan?

Vulkan was created by the Khronos Group, a standards organization that brings together a wide range of hardware and software companies, including NVIDIA, for the creation of open standard, royalty-free APIs for authoring and accelerated playback of dynamic media on a wide variety of platforms and devices. We’re proud to have played a leadership role in creating Vulkan. And we’re committed to helping developers use Vulkan to get the best from our GPUs.

Why You Should Care

Vulkan is great for developers. It reduces porting costs and opens up new market opportunities for applications across multiple platforms. Best of all, the NVIDIA drivers needed to take advantage of Vulkan are already here. On launch day we have Vulkan drivers available for Windows, Linux, and Android platforms. See our Vulkan driver page for all the details.

Here’s what Vulkan will mean for you:

For gamers with GeForce GPUs: Vulkan’s low latency and high efficiency let developers add more details and more special effects to their games, while still maintaining great performance. Because a Vulkan driver is thinner with less overhead, application developers will get fewer performance surprises. This translates to smoother, more fluid experiences.

NVIDIA is shipping fully-conformant Vulkan drivers for all GeForce boards based on Kepler or Maxwell GPUs running Windows (Windows 7 or later) or Linux. “We have been using NVIDIA hardware and drivers on both Windows and Android for Vulkan development, and the reductions in CPU overhead have been impressive,” said Oculus Chief Technology Officer John Carmack.

“We’ve successfully collaborated with the NVIDIA driver support team in the past, but I was amazed with the work they did on Vulkan,” said Croteam Senior Programmer Dean Sekuliuc. “They promptly provided us with the latest beta drivers so we were able to quickly implement the new API into Serious Engine and make The Talos Principle one of the first titles supporting Vulkan. Smooth!”

For professional application developers using Quadro: Our Vulkan and OpenGL drivers use an integrated binary architecture that enables the use of GLSL shaders in Vulkan. Developers also have the flexibility to continue using OpenGL or plan a smooth transition from OpenGL to Vulkan to take advantage of Vulkan’s new capabilities. For example, Vulkan’s multi-threaded architecture can enable multiple CPU cores to prepare massive amounts of data for the GPU faster than before. For design and digital content creation applications, this means enhanced interactivity with large models.

For mobile developers using Tegra: We’re making Vulkan available to developers for both Android and Linux. Vulkan will ship alongside OpenGL ES as a core API in a future version of Android. This means that standard Android will have a state-of-the-art API with integrated graphics and compute, ultimately unleashing the GPU in Tegra for cutting-edge vision and compute applications, as well as awesome gaming graphics. Developers can use Vulkan on NVIDIA SHIELD Android TV and SHIELD tablets for Android coding, and Jetson for embedded Linux development.