Model-based design of CANopen systems: mechatronics

Technology Update: Multiple co-existing disciplines in mechatronic system design hinder the use of software-oriented modeling principles, such as UML, but modern tools can be integrated into a working tool chain. (Part 1 of 2)

Model-based design has become mainstream in industry, but it has been used mostly for developing individual control functions or devices, not entire control systems. Current mechatronic systems are becoming more complex, and at the same time the demands on quality, time-to-market, and cost have tightened. An increasing number of systems are distributed, but development is typically done device by device, without systematic coordination of system structures. Attempts to manage distributed systems with written documents have led to inefficiency and inconsistent interfaces. Inconsistent interfaces have led to situations where it was easier and faster for designers to write a new software component than to reuse an existing one. Another typical occurrence is that significant interface adjustments must be performed during integration testing of a system. Based on such experiences, there is a demand for standardized and semantically well-formed interfaces between the multiple disciplines [16].

In typical mechatronic systems, multiple disciplines co-exist and none of them dominates. The multidisciplinary nature of the design work makes it very difficult to apply modeling principles dedicated to software-oriented development, such as the unified modeling language (UML) or the systems modeling language (SysML) [1]. Studies show that it is impossible to create a single tool that is optimal for all disciplines; instead, existing state-of-the-art tools can be integrated into a working tool chain.

Traditional distributed automation

In a typical distributed system, one function may be divided across several devices, and one device may serve multiple functions. Node-centric development can be difficult because the exact functional distribution is not known prior to development. Application-centric development and simulation also provide limited efficiency because of limited testing capabilities [11]. Software-centric development, without thorough system-level management, leads to serious interface inconsistencies. The old approach to managing communication interfaces is to embed communication descriptions into the application software [5]. Historically, this works for very small systems in which there is only one instance of each device type. When devices occur more than once in a system, this approach often leads to poor reuse of design artifacts or forces the adoption of dedicated configuration management processes.

Model-based design has become attractive because of the inefficiencies of the existing approaches. Requirements management in traditional software development has been document-centric; in some cases the requirements for the next version were even collected from the source code of the previous version [18]. It has also been documented that model-based design can reduce the number of defects and the wasted effort produced by current approaches.

Designing the logical and physical structures separately makes it challenging to manage the two parallel models and their connections without inconsistencies while still allowing incomplete models [1]. In addition, even when a model-based conceptual design is used, models may be converted into code manually, or control applications may be developed and tested separately, independently of each other. The main motivation for more systematic development lies in the assembly and service processes, rather than in development, because of their higher significance [3]. Systematic configuration management solves serious problems, for example, during system assembly and service [3], and it is required throughout the development process [18].

Existing modeling approaches

The increasing complexity of systems demands increasingly systematic development [10]. Most defects found during the late phases of traditional processes were caused by failures in requirements acquisition during the early phases [10] [18]. Validating specifications against models and matching models to code is easier with simulation models [9], and automatic code generation with proven tools makes it possible to automate code verification and move the focus of reviews from code to models. Automatic code generation from simulation models especially improves the development of high-integrity systems [9] [10] [11]. The simulation model is effectively an executable specification, from which certain documents can be generated [9] [10] [15] [18]. Higher integrity with lower effort can be achieved by validating the basic blocks and maximizing their reuse [15], and conformance to the corresponding standards helps to achieve the required quality [15]. Simulation models can also document the interfaces between structural blocks, improving consistency and enabling parallel co-development, which improves overall efficiency [10] [12] [18].

Old processes produce old results [18]; new development approaches, such as model-based design, improve designs, but new processes and tools are often needed to achieve the maximum improvement. Applying a new process to an existing, constrained design shows little benefit; the benefits emerge with new and more complex designs. A phase-by-phase approach is required to provide a learning curve, and it is important to keep existing code compatible with the new code generated from models. Design reuse is one of the main drivers of productivity. The systematic management of both interfaces and behavior is mandatory in safety-relevant system designs [7]. Instead of using model-based tools as a separate overlay on the existing processes and tools, automated interfaces need to be implemented between the tools [18]. Connecting model-based tools with existing legacy tools may require changes beyond the tools' built-in capabilities, increasing the effort required to maintain, develop, and upgrade the tool chain.

Modeling tools, scope

The Simulink tool was used in the project because it is the de facto modeling tool in research and industry and has open interfaces. It also solves most of the problems found in other modeling languages and approaches [1]. One of its most significant benefits is support for dynamic simulation. Unlike executable UML, for example, Simulink models can be used for modeling disciplines other than software. Models can be kept simple and based purely on behavior; the physical structure can be included in the model by adjusting the hierarchy of the logical model. If required, the models can be extended to cover more detailed dynamics as well.

Because of tightening time-to-market and functional safety requirements in machinery automation applications, higher productivity and support for model verification and design reuse became significant reasons for using Simulink. Its features include links to requirements management, model analysis, support for continuous simulation during the design process, test coverage analysis, and approved code generation capabilities [17]. Using Simulink models enables efficient reuse of the models for various purposes.

IEC 61131-3 programming languages were used for the evaluation because they are well standardized and their use is increasing. Their use, especially in safety-critical implementations, has grown because some of the IEC 61131-3 languages are recommended by functional safety standards [7]. A standardized XML-based code import and export format has recently been published, further improving systematic design processes.

The presented approach is technology-independent. CANopen was selected as the example integration framework because the CANopen standard family covers system management processes and information storage, and it is supported by many commercial tool chains that can be seamlessly integrated. The management process fulfills the requirements set for the design of safety-relevant control systems [7] and defines how CANopen interfaces appear in IEC 61131-3 programmable devices [2]. A managed process is required to reach functional safety targets [7]. There is also a wide selection of off-the-shelf devices on the market, enabling efficient industrial manufacturing and maintenance. Device profiles, in particular, help reuse common functions instead of repeatedly redeveloping them. CANopen also offers extensive benefits in assembly and service compared with other integration frameworks.

Relevant CANopen issues are reviewed first, to help readers understand the process that consumes the presented communication description. Next, the basic modeling principles are shown. Finally, the communication interface description in the model and the export of both application interfaces and behavior are presented. Modeling details are beyond the scope of this article.

CANopen, modeling

The CANopen system management process defines interface management throughout the system's lifecycle, from application interface description to spare-part configuration download. The first task in the process is to define application software parameters and signal interfaces as one or more profile databases (CPD) [4]. Next, node interfaces, defined as electronic data sheet (EDS) files, can be composed from the defined profile databases. The EDS files are used as templates for device configuration files (DCFs), which are system-position-specific and define the complete device configurations in a system. DCFs can be used directly in assembly and service as device configuration storage [19]. In addition to the DCFs, system design tools produce a communication description in a de facto communication database format, which can be used directly in device or system analysis. A process with clearly distinguishable phases improves the resulting quality because only a limited number of issues needs to be covered in each step [11].
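The EDS-to-DCF step above can be sketched in a few lines. EDS and DCF files use an INI-style format, so the example below treats an EDS as a template and fills in the system-position-specific values that make it a DCF. This is a minimal illustration, not a CANopen tool: the object (0x1017, producer heartbeat time) is a real standard object, but the node-ID, bit rate, and parameter value are hypothetical, and a real tool would handle many more entries and validation rules.

```python
# Sketch of the EDS -> DCF step: an EDS (INI format) serves as a template,
# and system-position-specific values are filled in to produce a DCF.
# Node-ID, bit rate, and the heartbeat value below are hypothetical examples.
import configparser

def eds_to_dcf(eds_text, node_id, baudrate, values):
    """values maps object sections (e.g. '1017') to their ParameterValue."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve key capitalization (ParameterName etc.)
    cfg.read_string(eds_text)
    # DCF-specific commissioning data (section spelling as used in practice)
    cfg.add_section('DeviceComissioning')
    cfg.set('DeviceComissioning', 'NodeID', str(node_id))
    cfg.set('DeviceComissioning', 'Baudrate', str(baudrate))
    for section, value in values.items():
        cfg.set(section, 'ParameterValue', value)  # position-specific value
    return cfg

EDS = """
[1017]
ParameterName=Producer heartbeat time
DataType=0x0006
AccessType=rw
DefaultValue=0
"""

dcf = eds_to_dcf(EDS, node_id=4, baudrate=250, values={'1017': '500'})
print(dcf['1017']['ParameterValue'])  # -> 500
```

The same template can thus produce one DCF per system position, which is what makes DCFs directly usable as configuration storage in assembly and service.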

Signals and parameters need to be handled differently [4] because of their different natures [14]. Signals are updated periodically and routed between the network and applications through the process image [2] [4]. The process image contains dedicated object ranges for variables, supporting both directions and the most common data types; the same information can be accessed as different data types. Signals are typically connected to global variables as absolute IEC addresses [2]. Signal declarations include metadata and connection information used for consumer-side plausibility and validity monitoring: metadata is used for plausibility checking and access path declaration. All relevant application development information is exported automatically from the CANopen project to the software project of each application-programmable device. The exported communication description can support monitoring, troubleshooting, and rapid control prototyping (RCP). The completed CANopen project automatically serves as the device configuration in assembly and service.
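A signal declaration of the kind described above can be pictured as a small record that ties together the object dictionary location, the absolute IEC address, and the metadata used for consumer-side plausibility checking. The sketch below is purely illustrative (not a standard API): the signal name, object index, IEC address, and limits are all hypothetical.

```python
# Illustrative sketch of a signal declaration carrying the metadata the text
# mentions: object dictionary location, absolute IEC address, and plausibility
# limits. All concrete values here are hypothetical.
from dataclasses import dataclass

@dataclass
class SignalDecl:
    name: str
    index: int        # object dictionary index within the process image
    subindex: int
    iec_address: str  # absolute IEC 61131-3 address, e.g. '%ID4'
    lo: float         # plausibility limits (metadata for the consumer side)
    hi: float

def plausible(decl: SignalDecl, raw_value: float) -> bool:
    """Consumer-side plausibility check against the declared limits."""
    return decl.lo <= raw_value <= decl.hi

boom_angle = SignalDecl('BoomAngle', 0xA040, 1, '%ID4', -45.0, 90.0)
print(plausible(boom_angle, 30.0))   # True
print(plausible(boom_angle, 120.0))  # False
```

Exporting such declarations to each device's software project is what keeps the producer and all consumers of a signal working from one consistent definition.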

The process image, located in the object dictionary, also serves communication between functions or applications inside the same device [8] and can be shared by different fieldbuses [6]; software layers above the process image are not necessarily required with CANopen. The internal object access type can be defined as RWx (read and write access) to enable bidirectional access inside the producer device. The external access type should be defined as RWR (read write on process input) to enable information distribution to the network. Access type RWW (read write on process output) should always be used for incoming signals, as they can be shared by multiple applications.
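The access-type rules above reduce to a simple decision: internal-only objects get plain read/write access, signals the device produces to the network get RWR, and signals consumed from the network get RWW. The helper below is a sketch of that rule under the conventions stated in the text; the function name and direction labels are hypothetical.

```python
# Sketch of the access-type selection rule described in the text.
# Function and argument names are hypothetical illustrations.
def access_type(direction: str, networked: bool = True) -> str:
    if not networked:
        return 'rw'    # internal bidirectional access inside the device
    if direction == 'out':
        return 'rwr'   # produced by the device, distributed to the network
    if direction == 'in':
        return 'rww'   # incoming signal, may be shared by several applications
    raise ValueError(f'unknown direction: {direction}')

print(access_type('out'))                   # rwr
print(access_type('in'))                    # rww
print(access_type('any', networked=False))  # rw
```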

Parameters are stationary variables that control the behavior of the software; their values change sporadically, and in CANopen systems they are typically stored locally in each device [2] [4] [14].

Parameters of application-programmable CANopen devices must always be located in the manufacturer-specific area of the object dictionary. The only exception occurs when device-profile-compliant behavior is included; then the parameters must be located according to the corresponding device profile. Because the standards do not define how parameter objects are organized, it is recommended to organize application-specific parameters in groups separated from the platform-specific objects. Different approaches to accessing parameters exist, for example, linking global variables to objects or using access functions or function blocks.
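The placement rule above can be expressed as a small check. The sketch assumes the standard CANopen object dictionary layout, with the manufacturer-specific area at indexes 0x2000-0x5FFF and the standardized device profile area starting at 0x6000; the function name and example indexes are illustrative.

```python
# Sketch of the parameter placement rule, assuming the standard CANopen
# object dictionary layout: manufacturer-specific area 0x2000-0x5FFF,
# standardized device profile area 0x6000-0x9FFF.
def parameter_area_ok(index: int, profile_compliant: bool = False) -> bool:
    if profile_compliant:
        # device-profile-compliant behavior: place per the device profile
        return 0x6000 <= index <= 0x9FFF
    # otherwise parameters belong in the manufacturer-specific area
    return 0x2000 <= index <= 0x5FFF

print(parameter_area_ok(0x2100))                           # True
print(parameter_area_ok(0x6040))                           # False
print(parameter_area_ok(0x6040, profile_compliant=True))   # True
```

Such a check could run as part of a systematic design process to catch misplaced parameter objects before device configuration files are generated.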