In industry, the open standard EtherCAT* is well established as a real-time fieldbus for highly distributed and synchronised systems. Open-source solutions for the bus master were first introduced in scientific installations by Diamond Light Source** and PSI, using EtherCAT hardware for digital and analog I/O. The European Spallation Source (ESS) decided to establish open-source EtherCAT systems for mid-performance data acquisition and motion control in accelerator applications. In this contribution we present the motion control software package ECMC, developed at the ESS, which uses the open-source Etherlab*** master to control the EtherCAT bus. The motion control interfaces to the EPICS motor record through a model 3 driver, supporting its functionalities such as positioning, jogging, homing and soft/hard limits. Advanced functionalities supported by ECMC include full servo-loop feedback, a scripting language for custom synchronisation of different axes, virtual axes, externally triggered position capture and interlocking. Using the example of a prototype 2-axis wire scanner, we show a fully EPICS-integrated application of ECMC on different EtherCAT and CPU hardware platforms.
* http://www.ethercat.org
** R. Mercado, I. J. Gillingham, J. H. Rowland, K. Wilkinson, "Integrating EtherCAT based IO into EPICS at Diamond", ICALEPCS 2011, Grenoble, 2011
*** http://www.etherlab.org

The real-time control systems for the Gemini Telescopes were designed and built in the 1990s using state-of-the-art software tools and operating systems of that time. These systems are in use every night, but they have not been kept up to date; they are now obsolete and very labor-intensive to support. This led Gemini to undertake a major effort to upgrade the software of its telescope control systems. We are in the process of deploying these systems to operations; in this paper we review the experience and lessons learned through this process and provide an update on future work on other obsolescence-management issues.

The SwissFEL beam-synchronous data-acquisition system is based on several novel concepts and technologies. It targets immediate data availability and online processing, and is capable of assembling an overall data view of the whole machine thanks to its distributed and scalable back-end. Load on data sources is reduced by streaming data as soon as it becomes available. The streaming technology used provides load balancing and fail-over by design. Data channels from various sources can be efficiently aggregated and combined into new data streams for immediate online monitoring, data analysis and processing. The system is dynamically configurable, various acquisition frequencies can be enabled, and data can be kept for a defined time window. All data are available and accessible, enabling advanced pattern detection and correlation during acquisition. The data can also be accessed in a code-agnostic way through the same REST API that is used by the web front-end. We give an overview of the design and special features of the system, and discuss the findings and problems we faced during machine commissioning.
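As an illustration of the channel-aggregation idea, the following Python sketch merges two per-channel streams into per-pulse records keyed by a pulse-id. The data, channel names and in-process generators are invented for illustration; the real back-end streams over the network.

```python
# Schematic model of combining per-channel streams into one aggregated
# stream keyed by pulse-id. All names and values here are illustrative.

def channel(name, values):
    """A toy data source: yields (pulse_id, channel_name, value)."""
    for pulse_id, v in enumerate(values):
        yield pulse_id, name, v

def aggregate(*channels):
    """Merge channels into per-pulse records {channel: value},
    emitted in pulse-id order."""
    pulses = {}
    for ch in channels:
        for pulse_id, name, v in ch:
            pulses.setdefault(pulse_id, {})[name] = v
    for pulse_id in sorted(pulses):
        yield pulse_id, pulses[pulse_id]

bpm = channel("bpm1", [0.1, 0.2])
charge = channel("charge", [200, 201])
for pulse_id, record in aggregate(bpm, charge):
    print(pulse_id, record)
```

A consumer sees one complete record per pulse instead of many loose channel updates, which is what makes immediate online correlation possible.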

For many years, we have used a commercial real-time operating system to run EPICS on VME controller boards. However, with the availability of EPICS on Linux it became more and more appealing to use Linux not only for PCs, but for VME controller boards as well. With a true multi-process environment, open-source software and all standard Linux tools available, development and debugging promised to become much easier. The cost factor also looked attractive, given that Linux is free. However, we had to learn that there is no such thing as a free lunch. While developing EPICS support for the VME bus interface was quite straightforward, pitfalls waited at unexpected places. We present challenges and solutions encountered while making Linux-based real-time VME controllers the main control-system component of SwissFEL.

The Beam Synchrotron Radiation Telescope (BSRT) is routinely used for estimating the transverse beam size, profile and emittance in the LHC; quantities playing a crucial role in the optimisation of the luminosity levels required by the experiments. During the 2017 LHC run, the intensified analog cameras used by this system to image the beam were replaced by GigE digital cameras coupled to image intensifiers. Preliminary tests revealed that the typically used sub-image rectangles of 128×128 pixels can be acquired at rates of up to 400 frames per second, more than 10 times faster than the previous acquisition rate. To address the increase in CPU workload for the image processing, new VME CPU cards (Intel 4-core/2.5 GHz/8 GB RAM) are envisaged to be installed, replacing the previous Intel Core 2 Duo/1.5 GHz/1 GB RAM. This paper focuses on the software changes proposed in order to take advantage of the multi-core capabilities of the new CPU for parallel computations. It describes how beam-profile calculations can be pipelined through a thread pool while ensuring that the CPU keeps up with the increased data rate. To conclude, an analysis of the system performance is presented.
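The pipelining idea can be sketched in Python: frames are processed concurrently by a thread pool while results are collected in submission order, so published profiles keep the frame order. The function and class names are illustrative, not the actual BSRT software API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: pipeline per-frame profile computations through a
# worker pool while preserving frame order for publication.

def fit_profile(frame):
    """Stand-in for the per-frame beam-profile computation:
    here we simply project the 2-D frame onto one axis."""
    return [sum(row) for row in frame]

class ProfilePipeline:
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def process(self, frames):
        # submit() returns immediately; collecting results in submission
        # order keeps the output ordering identical to the frame ordering
        futures = [self.pool.submit(fit_profile, f) for f in frames]
        return [f.result() for f in futures]

pipeline = ProfilePipeline()
frames = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(pipeline.process(frames))  # → [[3, 7], [11, 15]]
```

In a real deployment the pool size would be tuned to the four available cores, and frames would arrive from the camera driver rather than a list.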

Funding: This work was supported by the Korean Ministry of Science, ICT & Future Planning under the KSTAR project. In fusion experiments, a real-time network is essential for plasma control; it is used to transfer diagnostic data from diagnostic devices and command data from the PCS (Plasma Control System). Among these data, image data are the most difficult to transmit from a diagnostic system to other systems in real time, because images are much larger than other data types. Transmitting images therefore requires high throughput and best-effort delivery, while real-time transmission requires low latency. RTPS (Real Time Publish Subscribe) is reliable and provides Quality-of-Service properties that enable a best-effort protocol. In this paper, eProsima Fast RTPS is used to implement an RTPS-based real-time network. Fast RTPS offers low latency, high throughput, and both best-effort and reliable publish-subscribe communication for real-time applications over a standard Ethernet network. This paper evaluates the suitability of Fast RTPS for real-time image-data transmission. To evaluate the performance of a Fast RTPS-based system, a publisher system publishes image data and multiple subscriber systems subscribe to it.
* giilkwon@nfri.re.kr, Control team, National Fusion Research Institute, Daejeon, South Korea
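The difference between the two delivery modes can be illustrated with a minimal in-process model (this is not the Fast RTPS API): a best-effort subscriber with a bounded history drops the oldest frames when it falls behind, while a reliable subscriber retains everything.

```python
from collections import deque

# Illustrative in-process model of best-effort vs. reliable QoS.
# All class names are invented; real RTPS runs over the network.

class Subscriber:
    def __init__(self, reliable, depth=3):
        # deque(maxlen=N) silently discards the oldest entry when full,
        # mimicking best-effort delivery under load
        self.queue = deque() if reliable else deque(maxlen=depth)

    def deliver(self, frame):
        self.queue.append(frame)

class Publisher:
    def __init__(self):
        self.subscribers = []

    def publish(self, frame):
        for sub in self.subscribers:
            sub.deliver(frame)

pub = Publisher()
best_effort = Subscriber(reliable=False, depth=3)
reliable = Subscriber(reliable=True)
pub.subscribers += [best_effort, reliable]

for i in range(5):              # publish 5 image frames
    pub.publish(f"frame-{i}")

print(len(best_effort.queue), len(reliable.queue))  # → 3 5
```

For bulky image frames, dropping stale data (best effort) is often preferable to the retransmission delays that strict reliability would introduce.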

In the Korea Superconducting Tokamak Advanced Research (KSTAR) device, various diagnostic systems have been operated since the first plasma in 2008. Many diagnostic devices have been installed to measure plasma properties such as plasma current, magnetic current, electron density, electron temperature and impurities. The corresponding DAQ systems were developed with digitizers in various form factors, such as VME, CPCI, PXI, VXI and PCIe. This multitude of form factors installed at KSTAR complicates hardware management, software management and performance upgrades. In order to control real-time systems using several diagnostic signals, the real-time control system must share data with the diagnostic measurement systems without delay and without splitting individual signals into separate branches. Therefore, we developed the Multifunction Control Unit (KMCU) as a standard control system in the MTCA.4 form factor and implemented various diagnostic DAQ systems using KMCU V2, the KMCU-Z30. This paper presents the implementation of the KSTAR diagnostic DAQ systems configured with the MTCA.4-based KMCU, together with their operating results.

The FAIR General Machine Timing system has been in operation at GSI since 2015, and significant progress has been made in the last two years. The CRYRING accelerator was the first machine on campus operated with the new timing system and serves as a proving ground for new control-system technology to this day. A White Rabbit (WR) network was set up, connecting parts of the existing facility. The Data Master was put under control of the LSA physics core. It was enhanced with a powerful schedule language, and extensive research into delay-bound analysis with network calculus was undertaken. Several form factors of Timing Receivers were improved, their hardware and software now being in their second release and subject to a continuous series of automated long- and short-term tests in varying network scenarios. The final goal is the time synchronization of 2000-3000 nodes using the WR Precision Time Protocol, the distribution of TAI time stamps, and synchronized command and control of FAIR equipment. Promising test results for scalability and accuracy were obtained when moving from temporary small lab setups to CRYRING's control system, with more than 30 nodes connected over 3 layers of WR switches.

Funding: This work is supported by the National Natural Science Foundation of China (61333003) and the Science and Technology Development Foundation of the China Academy of Engineering Physics (14-FZJJ-0422). Rapidly changing demands for interoperability among heterogeneous systems lead to a paradigm shift from pre-defined control strategies to dynamic customization within many automation systems, e.g. large-scale scientific facilities. However, today's mass systems are of a very static nature. Fully changing the control process requires a large amount of expensive manual effort and is quite error-prone. Hence, flexibility will become a key factor in future control systems. The adoption of web services and Service-Oriented Architecture (SOA) can provide the required flexibility. Since the adaptation of SOAs to automation systems has to meet time-constrained requirements, particular attention should be paid to real-time web services for deterministic behaviour. This paper proposes a novel framework for the integration of a Time-Constrained SOA (TcSOA) into mass automation systems. Our design enables service encapsulation at the field level and evaluates how real-time technologies can be synthesized with web services to enable deterministic performance.

The KSTAR plasma control system has a powerful monolithic software architecture with a dedicated, centralized system design. However, with the increasing real-time functionality of distributed local control systems, a flexible high-performance software framework is needed. A new real-time core engine program inherits its design philosophy from the Very Large Telescope (VLT) control software. The new Tool for Advanced Control (TAC) engine is based on standard C++ and runs on Linux. It is a multithreaded core engine program for the execution of real-time applications. Elemental building blocks are chained together to form a control application.
* "Design and implementation of a standard framework for KSTAR control system", Fusion Engineering and Design, Vol. 89, 2015
** "Designing a common real-time controller for VLT applications", Proc. of SPIE Vol. 5496

D.C. Weber
University of Zurich, University Hospital, Zurich, Switzerland

Funding: This work is supported by the Giuliana and Giorgio Stefanini Foundation. Patient treatments in scanned proton therapy exhibit dead times, e.g. when adjusting beamline settings for a different energy or lateral position. On the one hand, such dead times prolong the overall treatment time; on the other hand, they grant possibilities to (retrospectively) validate that the correct number of protons has been delivered to the correct position. Efforts in faster beam delivery aim to minimize such dead times, which calls for different means of monitoring irradiation parameters. To address this issue, we report on a real-time beam monitoring system that supervises the proton beam position and current during beam-on, hence while the patient is under irradiation. For this purpose, we sample 1-axis Hall probes placed in the beam-scanning magnets and plane-parallel ionization chambers every 10 μs. FPGAs compare the sampled signals against verification tables (time vs. position/current charts containing upper and lower tolerances for each signal) and issue interlocks whenever samples fall outside. Furthermore, we show that by implementing real-time beam monitoring in our facility, we are able to respect the patient safety margins given by international norms and guidelines.
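The verification-table check can be sketched schematically in Python. The tolerance values below are invented; the real comparison runs in FPGA firmware at 10 μs sample intervals.

```python
# Schematic model of the verification-table check: each sample is
# compared against per-time-slot lower/upper tolerances, and an
# interlock is raised on the first out-of-tolerance sample.

def check_samples(samples, table):
    """samples: measured values, one per time slot.
    table: (lower, upper) tolerance pairs, same length.
    Returns the index of the first violation, or None if all pass."""
    for i, (value, (lo, hi)) in enumerate(zip(samples, table)):
        if not (lo <= value <= hi):
            return i          # the FPGA would issue an interlock here
    return None

table = [(9.5, 10.5), (19.0, 21.0), (29.0, 31.0)]
print(check_samples([10.0, 20.0, 30.0], table))  # → None (in tolerance)
print(check_samples([10.0, 22.5, 30.0], table))  # → 1 (interlock at slot 1)
```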

During the optimization phase of the FERMI Free Electron Laser (FEL) to deliver the best FEL pulses to users, many machine parameters have to be carefully tuned, such as the seed laser intensity and the dispersion strength. For that purpose, a new Python-based acquisition tool, called REALTA (Real Time Acquisition program), has been developed to acquire various machine parameters, electron beam properties and FEL signals on a shot-by-shot basis, thanks to the real-time capabilities of the TANGO control system. The data are saved continuously during the acquisition in an HDF5 file. The pyDART (Python Data Analysis Real Time) program is the post-processing tool that enables fast analysis of the data acquired with REALTA. It allows studying the correlations and dependencies between the FEL and electron beam properties and the machine parameters. In this work, we present the REALTA and pyDART toolkit developed for the FERMI FEL.
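A minimal sketch of the kind of shot-by-shot correlation analysis such a toolkit enables: a Pearson correlation between a machine parameter and the FEL signal, both recorded per shot. The data and parameter names are invented for illustration and do not come from pyDART itself.

```python
import math

# Pearson correlation between two per-shot series (invented data).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

seed_intensity = [1.0, 1.1, 1.2, 1.3, 1.4]   # per-shot machine parameter
fel_signal = [2.1, 2.3, 2.4, 2.7, 2.8]       # per-shot FEL intensity
r = pearson(seed_intensity, fel_signal)
print(round(r, 3))                            # close to 1: strong correlation
```

In practice the series would be read from the HDF5 file written by REALTA, with one entry per machine shot.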

The European Spallation Source (ESS) is a multi-disciplinary research facility based on what will be the world's most powerful pulsed neutron source. The Integrated Control System Division (ICS) is responsible for defining and providing control systems for the ESS facility. This control system will be based on EPICS, and it must be high-performance, cost-efficient, safe, reliable and easily maintainable. At the same time, there is a strong need for standardization. To fulfil these requirements, ICS has chosen different hardware platforms, such as MicroTCA, PLC and EtherCAT. EtherCAT, an Ethernet-based real-time fieldbus, will be analyzed, and several questions will be answered: Why has EtherCAT been chosen? In which cases is it deployed? How is it integrated into EPICS? What is the installation process? Along with data acquisition, the ESS Motion Control and Automation Group decided to use EtherCAT hardware to develop an open-source EtherCAT master motion controller for the control of all the actuators of the accelerator within the ESS project. Hence, an overview of the open-source motion controller and its integration into EPICS will also be presented.

The civil engineering activities in the framework of the High Luminosity LHC project, the Geneva GEothermie 2020 project and the continuous monitoring of the LHC civil infrastructures triggered the need for the installation of a seismic network at CERN. A 24-bit data acquisition system has been deployed at three CERN sites: ATLAS, CMS and Prévessin. The system sends all raw data to the Swiss Seismological Service and performs FFTs on the fly, which are stored in the LHC database. The system has shown a good sensitivity of 10^-16 (m/s)^2/Hz at 1 Hz.
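The on-the-fly spectral processing can be illustrated with a naive DFT-based power-spectral-density estimate. A real system would use an optimized FFT library; the sample rate, signal and normalization below are invented for illustration.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(n^2); fine for illustration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def psd(x, fs):
    """Simple one-sided power-spectral-density estimate, (units)^2/Hz."""
    n = len(x)
    return [abs(c) ** 2 / (fs * n) for c in dft(x)[: n // 2 + 1]]

fs = 8.0                                               # sampling rate, Hz
signal = [math.cos(2 * math.pi * 1.0 * t / fs) for t in range(8)]
p = psd(signal, fs)
peak = max(range(len(p)), key=lambda k: p[k])
print(peak * fs / len(signal))                         # PSD peak frequency → 1.0
```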

Today's front-end controllers, which are widely used in CERN's controls environment, feature CPUs with high clock frequencies and extensive memory. Their specifications are comparable to low-end servers, or even smartphones. The Java Virtual Machine (JVM) has been running on similar configurations for years now, and it seems natural to evaluate the behaviour of JVMs in this environment to determine whether firm or soft real-time constraints can be addressed efficiently. Using Java at this low level offers the opportunity to refactor CERN's current implementation of the device/property model and to move away from a monolithic architecture towards a promising and scalable separation of concerns, where the front-end may publish raw data that other layers decode and re-publish. This paper first presents the evaluation of Machine Protection control-system requirements in terms of real-time constraints, and a comparison of the performance of different JVMs with regard to these constraints. In a second part, it details the efforts towards a first prototype of a minimal real-time Java supervision layer providing access to the hardware layer.

CERN's Data Interchange Protocol (DIP)* is a publish-subscribe middleware infrastructure developed at CERN to allow lightweight communication between distinct industrial control systems (such as detector control systems or gas control systems). DIP is a rudimentary data-exchange protocol with a very flat and short learning curve and a stable specification; however, it lacks support for access control, smoothing and data archiving. This paper presents a mechanism implemented to keep track of every publisher and subscriber node active in the DIP infrastructure, along with the DIP name servers supporting it. Since DIP supports more than 55,000 publications, regrouping hundreds of industrial control processes, keeping track of system activity requires advanced visualization mechanisms (e.g. connectivity maps, live historical charts), and a scalable web-based interface** to render this information is essential.
* W. Salter et al., "DIP Description", LDIWG (2004), https://edms.cern.ch/file/457113/2/DIPDescription.doc
** B. Copy et al., "MOPPC145", ICALEPCS 2013, San Francisco, USA

The Front-End Software Architecture (FESA) framework is the basis for most real-time software development for accelerator control at CERN. FESA designs are defined in an XML document, which is validated against a schema to enforce framework constraints and used to automatically generate C++ boilerplate code in which the developer then implements specific code. Design files can rapidly grow in complexity, making an overview of the resulting system almost impossible. One way to overcome this is a graph-based representation of the design, with XML fragments summarized into logical blocks and associations between the blocks depicted by arrows. Since the intricacy of the graph mirrors the complexity of the design, it is also essential to provide an interactive Graphical User Interface (GUI) for parameterising and editing the graph generation, in order to fine-tune a simpler and cleaner illustration of a FESA design. This paper describes such a GUI (the FESA Graph Editor) and outlines how it benefits the design and documentation process of the FESA design document.
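A miniature sketch of the graph-summarization idea: collapse XML fragments into named blocks and derive arrows between them. The element names below are invented for illustration and do not reflect the actual FESA schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature FESA-like design fragment (invented tags).
design = """
<design>
  <device-instance name="Magnet1"/>
  <property name="Current" device="Magnet1"/>
  <property name="Status" device="Magnet1"/>
</design>
"""

def to_graph(xml_text):
    """Summarize XML fragments into graph nodes and edges:
    every named element becomes a node; each property gets an
    arrow from the device it belongs to."""
    root = ET.fromstring(xml_text)
    nodes = [e.get("name") for e in root]
    edges = [(e.get("device"), e.get("name"))
             for e in root if e.tag == "property"]
    return nodes, edges

nodes, edges = to_graph(design)
print(nodes)  # → ['Magnet1', 'Current', 'Status']
print(edges)  # → [('Magnet1', 'Current'), ('Magnet1', 'Status')]
```

A GUI layer would then let the user filter which node types to show and how deep to expand, yielding the simplified illustration the paper describes.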

For normally sighted developers it is hard to imagine how a user interface will look to a color-blind person. Our purpose is to draw attention to people with color blindness and to take their color vision into account. To that end, this paper presents the integration of color-blindness simulators into the development process of user interfaces. At the end we discuss the main contributing factors.
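A simple simulator of the kind discussed can be sketched as a 3×3 matrix transform applied to each RGB pixel. The protanopia matrix below is one widely circulated approximation, applied here directly to sRGB values for simplicity; real simulators convert to linear RGB first.

```python
# Toy color-blindness simulation: multiply each RGB pixel by a 3x3
# simulation matrix. The matrix is an approximation, for illustration.

PROTANOPIA = [
    [0.567, 0.433, 0.000],
    [0.558, 0.442, 0.000],
    [0.000, 0.242, 0.758],
]

def simulate(rgb, matrix=PROTANOPIA):
    """Return the simulated appearance of an (r, g, b) pixel, 0-255."""
    return tuple(
        min(255, round(sum(m * c for m, c in zip(row, rgb))))
        for row in matrix
    )

# A pure red UI element loses most of its distinctiveness:
print(simulate((255, 0, 0)))   # red as seen with protanopia
print(simulate((0, 255, 0)))   # green as seen with protanopia
```

Running the simulation over a UI screenshot during development makes problematic red/green color pairs visible to normally sighted developers.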