
By Brian Albright

March 1, 2019

Artificial intelligence (AI) is becoming a common part of the technology landscape, in hardware and software, including the use of machine learning and deep learning. Design engineers will be challenged to use both deep learning and machine learning in their own design processes to more quickly explore the design space and optimize final designs, as well as incorporate deep learning capabilities into their product designs for self-driving cars, smart medical devices and other goods.

Deep learning, in particular, is taking hold in the engineering space in various ways. The algorithms use mathematical models in a neural network to identify patterns in data and respond to them. This can be applied for pattern recognition in images, videos, sounds, text and any other type of data. In design, integrating deep learning and design processes can help bring better products to market more quickly.

“Deep learning and design engineering can learn a lot from each other and generate surprising outcomes.”

— Leslie Nooteboom, Humanising Autonomy

Design tools can be applied to deep learning processes to help gather and annotate data, and develop real-world applications, according to Leslie Nooteboom, CDO and co-founder of Humanising Autonomy. Similarly, deep learning can be applied to improve existing products and develop new ones.

“In the development phase, deep learning can be used to develop and experiment with intelligent aspects to the product. Deep learning can then assist in evaluating the data that comes from the implementation,” Nooteboom says.

His company is building computer vision models to predict the “full breadth of human behavior” to improve the safety and efficiency of self-driving cars and other systems. “These kinds of solutions have definitely helped the early stage development of many new products,” Nooteboom says. “A more industry-level collaboration between deep learning engineers and design engineers is much less common, which is a shame. Deep learning and design engineering can learn a lot from each other and generate surprising outcomes.”

Design Goes Deep

Deep learning is already being used throughout the design process, from researching for inspiration to application programming interfaces that allow designers to add speech recognition to a prototype. “We are currently just scratching the surface of what deep learning can do,” says Naji El Masri, CEO of Noesis Solutions. “Right now, we are using it to provide accurate models that enable engineering teams to better understand the behavior of a system. This approach is very effective, especially when handling massive amounts of data that would be unmanageable with any conventional approach. Deep neural networks (DNNs) are capable of much more than this, however. Their capability to reproduce the behavior of complex, non-linear systems with almost arbitrary accuracy … enables a large number of applications.”

DNNs could enable a much more efficient integration of computationally expensive component models into system simulation models by replacing these component models with high-fidelity response surface models (RSMs) or functional mock-up units (FMUs), according to El Masri.
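The surrogate workflow El Masri describes can be sketched in a few lines: sample an expensive component model offline, fit a small network to those samples, then query the cheap approximation inside the system model. The following Python sketch uses a toy stand-in function and scikit-learn's `MLPRegressor`; the function, sample counts and network size are illustrative assumptions, not a real component model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_component(x):
    """Stand-in for a costly component model (a real one might be a CFD run)."""
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0]

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(500, 2))   # design-of-experiments samples
y_train = expensive_component(X_train)

# Fit a small network as the response surface model (RSM).
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

# The trained surrogate can now stand in for the component inside a
# system simulation, answering in microseconds rather than hours.
X_new = rng.uniform(-1, 1, size=(5, 2))
predictions = surrogate.predict(X_new)
```

In practice the training samples would come from a design-of-experiments run of the real solver, and the surrogate would be validated against held-out simulations before it replaces anything.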

Other potential applications include the analysis of complex computational fluid dynamics (CFD) images to locate specific features such as turbulence, or the evaluation of a high number of different designs for feasibility.

At ANSYS, CTO Prith Banerjee says that advanced simulation of use cases can help generate the data needed for deep learning—for self-driving cars, for instance, simulation helps generate the possible scenarios that a vehicle might encounter. At the same time, AI and machine learning can improve simulations by automating some of the simulation management and generation.
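A toy version of that idea: run a cheap physics simulation to generate labeled driving scenarios that would be slow or dangerous to collect on the road, then train a model on the synthetic labels. The kinematics, ranges and feature choices below are deliberately simplified assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
speed = rng.uniform(5, 40, n)        # vehicle speed, m/s
gap = rng.uniform(5, 120, n)         # distance to obstacle, m
friction = rng.uniform(0.3, 1.0, n)  # road friction coefficient

# Simulated physics: stopping distance = v^2 / (2 * mu * g).
stopping = speed ** 2 / (2 * friction * 9.81)
collision = (stopping > gap).astype(int)   # label each simulated scenario

# Features chosen so the physical decision boundary is linear for the model.
X = np.column_stack([speed ** 2, friction * gap])
model = LogisticRegression(max_iter=5000).fit(X, collision)
accuracy = model.score(X, collision)
```

The point is not the classifier but the data source: every label came from the simulator, not from logged miles.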

Deep learning can also automate defect detection in design and a multitude of other processes, says Avinash Nehemiah, principal product manager, Computer Vision and Automated Driving, at MathWorks. “Really you can use this any place where there is enough data that you can learn a particular task,” Nehemiah says. “We’re seeing users apply deep learning and have been astounded by the variety of applications in design engineering workflows.”

Engineers are also incorporating deep learning into products they design, using speech recognition, language processing and imaging processing. Banerjee says that ANSYS customers are using deep learning to help improve Internet of Things (IoT) applications by improving the accuracy of predictive maintenance and analytics in their products. “You can use simulation to complement machine learning, and improve the quality of your predictions,” he says.

AI is also an important part of how augmented and virtual reality (AR/VR) systems work, particularly in applications where engineers or clients interact with realistic renders of a design, or where these simulated environments are used to train deep learning models. The technology supports tasks like eye tracking and hand-movement recognition, for example.

Engineering software tools are increasingly incorporating deep learning technology to enhance the design process and to incorporate AI into new products. NVIDIA’s Isaac Gem suite, for example, can be used for developing solutions for perception, navigation, manipulation and control in robotics systems. Developers can train and test robotics software using realistic virtual simulation environments.

MathWorks added more deep learning enhancements to its latest releases of MATLAB and Simulink for designing and implementing deep neural networks and AI development. The Deep Learning Toolbox can be used to train deep learning networks for computer vision, signal processing and other applications. A Deep Network Designer app uses a drag-and-drop interface to help engineers build, visualize and edit deep learning networks.
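Tools like these hide the training machinery behind apps and block diagrams. As a language-neutral sketch of what that machinery does (not the toolbox's API), here is a minimal fully connected network trained by gradient descent in plain NumPy; the architecture and learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR target

# One hidden layer of 8 units, sigmoid activations throughout.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))   # mean-squared-error loss
    # Backward pass: chain rule through both layers.
    dp = 2 * (p - y) * p * (1 - p) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad                       # gradient-descent step
```

The drag-and-drop designers automate exactly this loop, plus the layer bookkeeping, at far larger scale.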

“If an engineer or data scientist wants to detect specific regions, we have apps for labeling images, video and audio signals,” Nehemiah says. “We’ve also provided a way for engineers to graphically try out different machine learning techniques without programming.”

Deep Learning Data Challenges

Combining deep learning and design requires an investment in training and resources, as well as strong, well-defined use cases to identify suitable tasks for a neural network to “learn.”

“On the other hand, some effort might be required to make human expertise and machine models work together efficiently,” El Masri says. “This is especially true in a design engineering context where human experts are extremely knowledgeable.”

Nooteboom says creating a mutual understanding of both deep learning and design engineering fields is another challenge. “[Deep learning] engineers often dismiss the design process for being too vague and designers often neglect the engineering aspect for its complexity. It will require effort from deep learning engineers to learn about design processes and from design engineers to build a basic understanding of deep learning, but their interaction will most definitely improve both fields of skill,” he says.

“There are some workflow-related challenges,” Banerjee adds. “How do you integrate machine learning data and simulation data? We are working with our customers to solve some of these problems, particularly workflow issues that they encounter.”

One of the biggest obstacles to successfully leveraging deep learning is making sure that you have the correct data to “feed” the neural network. Each application will have different requirements for data gathering, filtering and labeling. Most applications will require significant, time-intensive data preparation.

Nooteboom uses an example of a deep learning model that is supposed to recognize when someone driving a car is upset. “There is no dataset of that kind of behavior readily available, so you would have to create your own,” he says. “Design engineers have a much wider palette of tools to understand how to gather that data. Making a customer journey allows you to understand the full range of emotions someone will go through when driving a car. Finding human behaviors that might indicate whether someone is upset is also important for annotation. And not to be forgotten, understanding what motivates someone to give access to this type of data is essential when following strict privacy regulations.”

Data bias is another problem that has cropped up frequently in applications like facial recognition, where systems are trained on a relatively narrow subset of faces with respect to race, ethnicity, gender or face shape. Designers have to ensure they are populating the data set with the widest, most representative information.

“On top of this, there is the wrong belief that deep learning can ingest and process whatever kind of data is thrown at it,” El Masri says. “While there is much more flexibility than conventional machine learning, this is still not true. Significant effort should be put into cleaning and formatting data so it is useful to train a deep neural network. However, this is much more software engineering than machine learning.”
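A small sketch of the cleaning and formatting work El Masri describes: drop unusable records, cut obvious outliers, and impute remaining gaps before any training happens. The sensor log and column names below are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical sensor log with the usual defects: a gap, a spike, a
# missing label.
raw = pd.DataFrame({
    "sensor_a": [0.9, np.nan, 1.2, 80.0, 1.1],   # 80.0 is a spurious spike
    "sensor_b": [10.0, 11.5, np.nan, 10.2, 9.8],
    "label":    ["ok", "ok", "fault", "ok", None],
})

clean = raw.dropna(subset=["label"])           # unlabeled rows are unusable
# Crude outlier cut; keep NaNs for imputation in the next step.
keep = clean["sensor_a"].isna() | (clean["sensor_a"].abs() < 10)
clean = clean[keep]
# Impute remaining gaps with per-column means of the surviving rows.
clean = clean.fillna(clean.mean(numeric_only=True))
```

As El Masri notes, this is mostly software engineering: each step encodes a judgment about the data (what counts as an outlier, how to fill gaps) that a real project would validate rather than hard-code.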

Data labeling can be laborious and expensive. “The other challenge is the complex math involved,” Nehemiah says. “You have millions of operations that can seem like a black box, so it’s important to get an understanding of what is causing the machine learning tool to predict what it is predicting and why. That brings value to the evaluation tools.”

Banerjee points out that machine learning and deep learning techniques can actually be used to help improve data gathering. “People typically do machine learning based on real operational data, but in applications like self-driving cars, you would need to drive hundreds of millions of miles to get all the potential data you would need,” Banerjee says. “We can use simulation to model the real-world systems.”

Invest in Data Science, Deep Learning Expertise

Deep learning, combined with abundant computing power and 3D printing, will play a key role in enabling generative design solutions, as well as in speculative design projects that attempt to anticipate how products will need to operate in the future.

As more AI-based solutions emerge in the field, Banerjee says companies will need to invest in data science expertise to effectively use deep learning in the design process. “Our customers are already hiring data science experts,” he says. “It takes that kind of ecosystem to make it work. Our customers need to be knowledgeable and we need to be knowledgeable.”

“Hiring deep learning engineers can be a challenge, as the field has only recently become so big,” Nooteboom says. “The level of expertise required depends on the application. If the engineer will have to develop new model architectures, it will require several years of experience. If the engineer will need to retrain existing models with new data, less seniority is required. On the design engineer side, an understanding of the deep learning development process and its limitations is the least they should be capable of.”

And companies should also be prepared to invest in new research to make sure that they not only have a sufficient knowledge base, but know where to apply the technology. “To exploit deep learning, it is not enough to implement approaches already explored in the literature (that might not even exist for the specific use case); actual research must be put in place,” El Masri says. “Research in machine learning should be the first target for a company looking to leverage deep learning in engineering.”