
Embedded systems have been a major part of modern life for decades, while artificial intelligence (AI) has recently experienced a rapid growth in popularity. The inherent synergies between these two technologies make them an ideal combination for a number of applications. Embedded systems can produce a wealth of data points and act based upon real-time inputs, and AI thrives on parsing through aggregated data and can be used to control embedded systems in real time.

Given that, we can expect a surge of innovations that combine these two powerful technologies over the next five to 10 years.

Do You Want AI with That?

Embedded systems are found throughout retail and quick service restaurant (QSR) locations everywhere. Point of sale (POS) systems, kiosks, and barcode scanners are all examples of embedded systems commonly found at these locations. Recently, AI has begun to make its presence felt in the retail and QSR sector as well. For example, CaliBurger, a California-based hamburger chain, has introduced two very interesting combinations of AI and embedded systems at its locations:

Figure 1: Flippy, the $100K burger-flipping robot that “moves like a ninja” (Source: www.pasadenanow.com)

Flippy the robotic kitchen assistant—CaliBurger partnered with Miso Robotics to create Flippy, a robot that helps prepare hamburgers (Figure 1). Flippy is promoted as the world’s first burger-flipping robot and is currently in use at a CaliBurger location in Pasadena.

A facial recognition loyalty program—According to CNBC, CaliBurger launched a face-based loyalty program at the end of 2017 that allows customers to pay by simply smiling at a screen. By combining touchscreen ordering kiosks with the facial recognition payment system, CaliBurger can reduce labor costs.

The two innovations being piloted at CaliBurger may indicate what is to come for the retail and QSR sector as firms continue to combine AI and embedded systems.

Industry 4.0: Why Analytics’ Influence Will Grow

Industry 4.0 has been a popular buzzword over the last few years. With grand visions of smart factories and automation, proponents of Industry 4.0 explain how it will fundamentally shift the way we do business. But what exactly will this technology-driven industrial revolution consist of? The Boston Consulting Group has identified nine technologies that make up what is commonly referred to as Industry 4.0: augmented reality, big data and analytics, autonomous robots, simulation, horizontal and vertical system integration, the Industrial Internet of Things, cybersecurity, the cloud, and additive manufacturing. As we transition into the next decade, we can expect firms to continue to iterate and innovate on these technologies, eventually leading to a paradigm shift in how manufacturing processes work. AI and embedded systems will work together to drive much of this change.

The Industrial Internet of Things (if you need a quick overview of the Internet of Things, check out The Internet of Things: Living in a World of Connected Devices) will contain a variety of embedded systems connected to a network. These systems will perform a variety of tasks on the factory floor, ranging from monitoring temperature to controlling mechanical parts. They will generate data that feeds “big data” databases and will respond to inputs and insights from the analytics programs run against that data. For example, if a temperature sensor reports that a device has begun to overheat, logic could be triggered to shut down or scale back that device.

While this sort of automation is helpful, the insights and advancements that analytics makes possible may become even more influential. The sensors and devices on the factory floor will generate data on a scale no human could process. Business intelligence and analytics software, however, will be able to process this data in a way that surfaces insights and patterns that can optimize operations and enable further automation.
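The overheating example above can be sketched in a few lines. This is an illustrative sketch only: the threshold values, device names, and the shape of the telemetry record are assumptions, not drawn from any particular industrial control system.

```python
# Sketch of the temperature-based safety logic described above.
# Thresholds and action names are illustrative assumptions.

WARN_TEMP_C = 80.0      # scale the device back above this temperature
CRITICAL_TEMP_C = 95.0  # shut the device down above this temperature

def react_to_temperature(temp_c):
    """Map a single sensor reading to a control action for the device."""
    if temp_c >= CRITICAL_TEMP_C:
        return "shutdown"
    if temp_c >= WARN_TEMP_C:
        return "scale_back"
    return "normal"

# Each reading would also be appended to the "big data" store that the
# plant's analytics software mines for longer-term patterns.
telemetry_log = []

def process_reading(device_id, temp_c):
    telemetry_log.append({"device": device_id, "temp_c": temp_c})
    return react_to_temperature(temp_c)
```

In a real deployment the analytics layer would look at `telemetry_log` in aggregate, spotting, say, a device whose temperature creeps upward over weeks before it ever crosses a threshold.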

Smarter Homes, Smarter Cities

Intelligent devices designed to make life at home more convenient and connected have flooded the marketplace over the last few years. Many smart home devices are actually embedded systems connected to the network. Examples of these devices include:

Ecobee4—The Ecobee4 is a smart thermostat with Amazon’s Alexa virtual assistant built in. In addition to helping automate a home’s HVAC system, the added Alexa functionality enables it to do things like find recipes and order groceries based on voice commands.

Video Doorbells—These video doorbells from Ring can stream video from the front door, have built-in motion sensors, and can provide two-way talk, all of which can be accessed from a cell phone app.

Smart Refrigerators—These refrigerators from Samsung connect with a smartphone app and allow users to share photos on the fridge’s screen, manage shopping lists, and stream TV or music.

While there are legitimate security concerns to resolve as the smart home movement progresses (for a quick primer on securing your smart embedded devices, check out this article on the topic), we expect the market to continue to grow and AI to play a big role in it over the next five to ten years. As the market matures, we will see more and more smart home devices that improve the convenience of everyday life. With advancements in the AI technology that ties these different systems together, expect more automation and predictive analytics brought to users. For example, an AI program could learn your shopping habits and automatically add items to your grocery list based on the current contents of your fridge.
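The grocery-list idea above can be sketched simply: learn which items the household usually keeps stocked, then add whatever is missing from the fridge to the shopping list. The item names and the “learning” step (here just a frequency count of past purchases) are hypothetical stand-ins for a real recommendation model.

```python
# Illustrative sketch of an AI-assisted grocery list. The learning
# step is a simple purchase-frequency count, a stand-in for a real model.
from collections import Counter

def learn_staples(purchase_history, min_purchases=3):
    """Treat anything bought at least `min_purchases` times as a staple."""
    counts = Counter(purchase_history)
    return {item for item, n in counts.items() if n >= min_purchases}

def build_shopping_list(staples, fridge_contents):
    """Anything the household usually stocks but the fridge lacks."""
    return sorted(staples - set(fridge_contents))

history = ["milk", "eggs", "milk", "butter", "milk", "eggs", "eggs"]
staples = learn_staples(history)                   # milk and eggs
shopping = build_shopping_list(staples, ["milk"])  # eggs are missing
```

A production system would replace the frequency count with a model that also accounts for seasonality and consumption rate, but the flow (sense the fridge, compare against learned habits, act) is the same.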

Smart cities are a concept similar to smart homes but scaled to the metropolitan level. Sensors, cameras, and smart meters are some of the many embedded devices you will find in smart cities. These devices are coupled with business intelligence and analytics software that help cities make better decisions aimed at improving the quality of life for residents while reducing cost and minimizing waste. The smart city movement already boasts a number of success stories. For example, according to the Smart Cities Council, the City of Calgary was able to use a data-driven approach to predict and mitigate floods. Moving forward, AI may be able to push the smart cities movement even further. At the heart of AI technology is the ability to capture data inputs and “learn” from them. A city full of embedded devices can provide a wealth of data, allowing AI to help cities solve real-world problems, potentially before they even occur.

Tackling Healthcare Challenges

Embedded systems are just as ubiquitous in healthcare. Embedded devices are integral parts of a variety of healthcare systems, ranging from hospital medical equipment to personal activity monitors that encourage a healthy lifestyle. Over the last few years, AI has also made a huge impact on healthcare. The healthcare industry generates a wealth of data that can provide insights and lead to better diagnoses if we are able to process it all, and AI is helping to do just that. Products like IBM Watson Health use AI to parse healthcare data and help tackle some of the major challenges facing healthcare professionals today. AI can assist in areas ranging from prevention to early detection to diagnosis. Watson in particular is working on complex problems related to cancer, diabetes, and drug discovery. Moving forward, we can expect to see more integration of AI with embedded systems that generate valuable healthcare data, helping healthcare professionals improve outcomes for patients and helping consumers make better health-related decisions.

Conclusion

The ability of embedded systems to sense and act on the physical world while producing large amounts of data makes them well suited for applications where AI needs to interact with the real world. Over the next decade, you can expect to see these two technologies enable innovations in retail, Industry 4.0, smart homes, smart cities, and healthcare.

About the author: Gil Ben-Dov is a 20-year technology veteran with experience at industry leaders Cisco Systems, ResMed, and Air Liquide. He has served as CEO of Total Phase since October 2014. Prior to that appointment, he served as VP/General Manager from 2013 to 2014, having joined Total Phase in 2012 as director of sales.

Artificial intelligence (AI), long relegated to the realm of science fiction and, more recently, to high-powered computing machinery, is slowly finding its way onto lower-end embedded hardware. For about $100 USD, it is now possible to acquire all the necessary hardware and software to build a customized, vision-based AI solution. Last month Google released its AIY Vision Kit, which is powered by the Intel® Movidius MA2450 Vision Processing Unit (VPU). This is the same embedded hardware that powers Intel’s Neural Compute Stick USB platform, Google’s Project Tango, and recent generations of DJI-branded drones. Put simply, VPUs are customized microprocessor hardware built to handle the specialized machine learning algorithms that enable onboard machine vision processing, including convolutional neural networks (CNNs) and the scale-invariant feature transform (SIFT). VPUs are necessary for efficiently handling neural network computer vision algorithms, just as graphics processing units (GPUs) were developed apart from central processing units (CPUs) to better handle graphics-intensive tasks.

AIY Vision Kit’s do-it-yourself assembly (Source: Google)

The onboard processing is a key distinction. Instead of relying on cloud-based processing of locally captured images, the $45 AIY Vision Kit’s VisionBonnet (the plug-in board that contains the MA2450 and associated circuitry) can run its neural network algorithms against imagery without the need for Internet connectivity. Sometimes you do not want an IoT device to require connectivity to process data. Removing the requirement for constant connectivity means some of the smarts live in the IoT device itself, which eliminates potentially unacceptable performance delays and the security risks associated with images traversing the Internet. This concept is sometimes referred to as “edge computing” or “fog computing.” Going to the cloud for processing resources is not always desirable; pushing computation as close to the sensor node as possible is preferable in many cases, especially for applications such as autonomous vehicles. In this way, latency is improved and connectivity is reserved for tasks like downloading an improved training model.
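The edge-computing pattern described above boils down to a simple loop: infer locally, transmit only the compact result, and never let the raw image leave the device. The sketch below illustrates that flow; the classifier is a deliberately trivial stand-in (on the Vision Kit the VisionBonnet’s neural network would fill that role), and the result format is an assumption for illustration.

```python
# Edge-computing sketch: infer on-device, upload only metadata.
# classify_locally() is a trivial stand-in for real VPU inference.

def classify_locally(image_pixels):
    """Placeholder for on-device inference; returns (label, score)."""
    brightness = sum(image_pixels) / len(image_pixels)
    return ("day", 0.9) if brightness > 127 else ("night", 0.9)

def handle_frame(image_pixels, upload_queue):
    label, score = classify_locally(image_pixels)
    # Only a few bytes of metadata are queued for upload; the image
    # itself never traverses the Internet, which is the latency and
    # privacy win of edge computing.
    upload_queue.append({"label": label, "score": score})
    return label

queue = []
handle_frame([200] * 100, queue)  # a bright frame classifies as "day"
```

Connectivity, when available, is then used only to drain `queue` and to fetch improved model weights, not to stream frames to the cloud.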

The current iteration of the AIY Vision Kit is built specifically for the Raspberry Pi Zero W Linux-based single-board computer. The kit supports two deep learning frameworks: Google’s own TensorFlow and Caffe from Berkeley’s AI Research Lab. It can handle image processing at 30 frames per second. From a practical perspective, the AIY Vision Kit offers starry-eyed startups unprecedented capability in an extremely affordable and hackable package.

The VisionBonnet comes with three pre-built neural network models. The first can both detect faces and determine the emotion each face is expressing. The next relies on MobileNet models (a suite of open source computer vision models built for TensorFlow on resource-constrained embedded devices) to recognize thousands of different objects. The last can differentiate between humans, dogs, and cats. Google has released the TensorFlow source code for the models, along with a compiler, for the intrepid innovators who wish to tweak the models or develop their own.

From the product prototyping and development perspective, the architecture of the AIY Vision Kit allows for very powerful yet straightforward interfacing. On the hardware side, four general-purpose input/output (GPIO) pins are available for interacting with external sensors and actuators. On the software side, the VisionBonnet can be programmed using the increasingly popular Python language.
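The combination of detection models and GPIO pins is what makes the kit interesting for prototyping: a detection result can directly drive an actuator. The sketch below shows that glue logic in Python; the detection-result format, the pin number, and the pet-door scenario are all assumptions for illustration, not the kit’s actual API (on the device, the returned pin would be driven through a GPIO library).

```python
# Hypothetical glue logic: map vision detections to a GPIO action.
# The detection format and pin assignment are illustrative assumptions.

DOOR_PIN = 17  # hypothetical GPIO pin wired to a pet-door actuator

def decide_action(detections, wanted="dog", threshold=0.6):
    """Return the pin to raise if a confident match is found, else None."""
    for d in detections:
        if d["label"] == wanted and d["score"] >= threshold:
            return DOOR_PIN
    return None

# A dog detected with 80% confidence selects the door pin; a
# low-confidence cat does not.
pin = decide_action([{"label": "dog", "score": 0.8},
                     {"label": "cat", "score": 0.4}])
```

Keeping the decision logic separate from the hardware calls like this also makes it easy to test on a desktop before deploying to the Pi.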

Over time, we have seen technology evolve rapidly, due in part to falling prices paired with increasing capability. Part of that Moore’s Law story is making affordable tech accessible to those who work with low-end, low-cost embedded hardware such as the Arduino and Raspberry Pi. Affordable equates to accessible. Accessibility leads to opportunity. Rock on.