On 21 June 2016 the MAX IV Laboratory was inaugurated in the presence of officials and has since welcomed the first external researchers to the new experimental stations. The MAX IV facility is the largest and most ambitious Swedish investment in research infrastructure and is designed to be one of the brightest sources of X-rays worldwide. The current achievements, progress, collaborations and vision of the facility will be described from the perspective of the control and IT systems.

The ITER-CERN collaboration agreement initiated the development of a PROFINET communication interface that may replace the WorldFIP interface in non-radiation areas. The main advantage of PROFINET is simplified integration within the CERN controls infrastructure, which is based on Programmable Logic Controllers (PLCs). CERN prepared the requirements and subcontracted the design of a communication card prototype to the Technical University of Bern. The designed PROFINET card prototype uses the NetX Integrated Circuit (IC) for PROFINET communication and an FPGA to collect the electrical signals from the back-panel (the electrical signal interface for instrumentation conditioning cards). CERN is implementing new functionalities involving programming, automation engineering and electronic circuit design. The communication between the card and the higher layers of control is based on the OPC UA protocol. Configuration files supporting new types of instrumentation cards are being developed and are compatible with the SIEMENS SIMATIC automation environment. It is worth mentioning that all required data calculations and protocol handling are performed by a single netX50 chip.

MARWIN is a mobile autonomous robot platform designed for performing maintenance and inspection tasks alongside the European XFEL accelerator installation in operation in Hamburg, Germany. It consists of a 4-wheel-drive chassis and a manipulator arm. Due to the unique Mecanum drive technology in combination with the manipulator arm, the whole robot provides three degrees of freedom. MARWIN can be operated in a pre-configured autonomous mode as well as in a remotely controlled mode. Its operation can be supervised through various cameras. The primary use case of MARWIN is measuring radiation fields. For this purpose MARWIN is equipped with both a mobile Geiger-Mueller tube mounted at the tip of the manipulator arm and a stationary multi-purpose radiation detector attached to the robot's chassis. This paper describes the mechanical and electrical setup of the existing prototype, the architecture and implementation of the control routines, the strategy implemented to handle radiation-triggered malfunctions, and the energy management. In addition, it reports on recent operational experience, envisaged improvements and further use cases.

ALMA is composed of many hardware and software systems, each of which must function properly to ensure maximum efficiency. Operators in the control room follow the operational state of the observatory by looking at a set of non-homogeneous panels. In case of problems, they have to find the cause by looking at the right panel, interpret the information and implement the counter-action, which is time consuming. After an investigation, we therefore started the development of an integrated alarm system that takes monitor point values and alarms from the monitored systems and presents alarms to operators in a coherent, efficient way. A monitored system has a hierarchical structure modeled as an acyclic graph whose nodes represent the components of the system. Each node digests monitor point values and alarms through a provided transfer function and sets its output to working or non-nominal, taking the operational phase into account. The model can be mapped to a set of panels to increase operators' situational awareness and improve the efficiency of the facility.
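
The node-and-transfer-function idea can be sketched as follows. This is a minimal illustration, not the ALMA implementation: node names, the monitor-point format and the example transfer function are all invented for the example.

```python
# Hierarchical alarm model sketch: each node applies a transfer function
# to its own monitor points and to the states of its children, yielding
# a single rolled-up state that propagates toward the root of the graph.
NOMINAL, NON_NOMINAL = "NOMINAL", "NON-NOMINAL"

class Node:
    def __init__(self, name, transfer, children=()):
        self.name = name
        self.transfer = transfer        # (points, child_states, phase) -> state
        self.children = list(children)
        self.monitor_points = {}        # latest values/alarms fed to this node

    def state(self, phase="OPERATIONAL"):
        child_states = [c.state(phase) for c in self.children]
        return self.transfer(self.monitor_points, child_states, phase)

def any_fault(points, child_states, phase):
    # Example transfer function: non-nominal if any monitor point is bad
    # or any child is non-nominal; faults are ignored during SHUTDOWN.
    if phase == "SHUTDOWN":
        return NOMINAL
    if NON_NOMINAL in child_states:
        return NON_NOMINAL
    if any(not ok for ok in points.values()):
        return NON_NOMINAL
    return NOMINAL

pump = Node("pump", any_fault)
cryo = Node("cryostat", any_fault, children=[pump])
pump.monitor_points["pressure_ok"] = False
rolled_up = cryo.state()   # the pump fault propagates up to the cryostat
```

A panel then only needs to render the rolled-up state of each node, drilling down the graph when an operator investigates.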

The real-time control systems for the Gemini Telescopes were designed and built in the 1990s using state-of-the-art software tools and operating systems of that time. These systems are in use every night, but they have not been kept up to date and are now obsolete and very labor-intensive to support. This led Gemini to engage in a major effort to upgrade the software of its telescope control systems. We are in the process of deploying these systems to operations; in this paper we review the experience and lessons learned through this process and provide an update on future work on other obsolescence management issues.

Funding: Jiangmen Underground Neutrino Observatory (JUNO) Experiment
The Jiangmen Underground Neutrino Observatory (JUNO) is the second phase of the Daya Bay reactor neutrino experiment. The detector was designed as 20 kton of liquid scintillator (LS) contained in an acrylic sphere with an inner diameter of 34.5 meters. Due to the gigantic size of the detector, there are approximately 40k monitoring points, including 20k high-voltage channels of the PMT array, temperature and humidity sensors, electronics crates, and power monitoring points. Since most of the Daya Bay DCS was developed in a LabVIEW-based framework, which is constrained by operating-system upgrades and run-time licenses, a framework migration and upgrade are needed for the JUNO DCS. This paper introduces the new DCS framework based on EPICS (Experimental Physics and Industrial Control System). The implementation of the IOCs for the high-voltage crates and modules, the StreamDevice drivers, and the embedded temperature firmware will be presented, together with the software and hardware realization and the remote control method. The upgraded framework can be widely reused for devices with the same hardware and software interfaces.
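
The kind of work a StreamDevice-style driver performs can be illustrated by the parsing step: the IOC queries an instrument and converts an ASCII reply into process-variable fields. The command and reply formats below are invented for illustration; they are not those of the actual JUNO high-voltage crates.

```python
# Sketch of ASCII-reply parsing as done by a stream-protocol driver:
# one reply line per channel is turned into typed PV fields.
import re

def parse_hv_reply(reply: str) -> dict:
    """Parse a hypothetical reply like '#CH02:V=1502.4,I=0.31,ON'."""
    m = re.fullmatch(r"#CH(\d+):V=([\d.]+),I=([\d.]+),(ON|OFF)", reply.strip())
    if m is None:
        raise ValueError("malformed reply: %r" % reply)
    return {
        "channel": int(m.group(1)),
        "voltage": float(m.group(2)),   # volts
        "current": float(m.group(3)),   # assumed microamps
        "enabled": m.group(4) == "ON",
    }

pv = parse_hv_reply("#CH02:V=1502.4,I=0.31,ON")
```

In a real IOC this logic lives in a StreamDevice protocol file rather than Python, but the mapping from instrument reply to PV values is the same.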

In a complex machine such as a particle accelerator there are thousands of analogue signals that need monitoring, and even more signals that could be used for debugging or as a tool for detecting symptoms of potentially avoidable problems. Usually it is not feasible to acquire and monitor all of these signals, not only because of the cost but also because of the cabling and space required. The RF system in the Large Hadron Collider is protected by multiple hardware interlocks that ensure safe operation of the klystrons, superconducting cavities and all the other equipment. In parallel, a diagnostic system has been deployed to monitor the health of the klystrons. Due to the limited amount of space and the moderate number of signals to be monitored, a standard approach with a full VME or CompactPCI crate was not selected. Instead, small embedded industrial computers with USB oscilloscopes chosen for the specific application have been installed. This cost-effective, rapidly deployable solution will be presented, including existing and possible future installations as well as the software used to collect the data and integrate it with the existing CERN infrastructure.

The Em# project is a collaboration between the MAX IV Laboratory and the ALBA Synchrotron to obtain a high-performance four-channel electrometer. Besides the objective of accurate current measurements down to the picoampere range, the project aims to establish a reusable instrumentation platform with time-stamped data collection, able to perform real-time calculations for flexible feedback implementations. The platform is based on an FPGA responsible for acquisition and synchronization, on which a real-time protocol between the modules (Harmony) has been implemented [*]. The acquired data is transmitted via PCIe to a Single Board Computer running an embedded Linux distribution, where high-level processing and synchronization with the upper levels of the control system are executed. In this proceeding, the reasons that led to starting a complex instrument development instead of using a Commercial Off-The-Shelf (COTS) solution will be discussed. The results of the produced units will be analyzed in terms of accuracy and processing capabilities. Finally, different Em# applications in particle accelerators will be described, further widening the functionality of the current state-of-the-art instrumentation.
[*] Present and Future of Harmony Bus, a Real-Time High Speed Bus for Data Transfer Between FPGA Cores, these proceedings

PandABox is a development project resulting from a collaboration between Synchrotron SOLEIL and Diamond Light Source started in October 2015. The initial objective driving the project was to provide multi-channel encoder processing for synchronizing data acquisitions with motion systems in experimental continuous scans. The resulting system is a multi-purpose platform well adapted for multi-technique scanning and feedback applications. This flexible and modular platform embeds an industrial electronics board with a powerful Xilinx Zynq 7030 SoC (Avnet PicoZed), an FMC slot, an SFP module, TTL and LVDS I/Os, and removable encoder peripheral modules. In the same manner, the firmware and software framework has been developed in a modular way to be easily configurable and adaptable. The whole system is open and extensible from the hardware level up to integration with control systems like TANGO or EPICS. This paper details the hardware capabilities, platform performance, framework adaptability, and the project status at both sites.

The Cryomodule-On-Chip (CMOC) simulation engine is a Verilog implementation of a cryomodule model used for Low-Level RF development for superconducting cavities. The model includes a state-space model of the accelerating fields inside a cavity, the mechanical resonances inside a cryomodule, and their interactions. Implementing the model alongside the LLRF controller in the same FPGA allows for live simulation of an RF system. This provides an interactive simulation framework in which emulated cavity signals are produced at the same rate as in a real system, making it possible to observe longer time-scale effects than in software simulations, and serves as a platform for software development and operator training.
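
As a toy, software-only analogue of the kind of model CMOC steps in hardware, consider a first-order discrete-time state-space model of the cavity field envelope. The bandwidth, step rate and gain below are illustrative values, not CMOC parameters, and the real model is far richer (complex fields, mechanical modes, detuning).

```python
# One-pole cavity envelope model stepped at a fixed rate, mirroring the
# fixed-tick update an FPGA implementation performs:
#   V[n+1] = a*V[n] + (1 - a)*K*u[n]
# where the discrete pole 'a' follows from the cavity half-bandwidth.
import math

def simulate_cavity(drive, f_half_bw=200.0, dt=1e-6, gain=1.0):
    """Step the envelope model over a drive waveform, one value per tick."""
    a = math.exp(-2 * math.pi * f_half_bw * dt)   # discrete-time pole
    v, out = 0.0, []
    for u in drive:
        v = a * v + (1.0 - a) * gain * u          # state update each tick
        out.append(v)
    return out

# Constant drive: the envelope charges toward its steady-state value
# with the cavity time constant (~0.8 ms for these numbers).
env = simulate_cavity([1.0] * 20000)
```

Because the update runs once per tick, emulated signals come out at the same rate as real ones, which is exactly what makes the FPGA version usable for operator training and long time-scale studies.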

Complex control systems often require complex tools to facilitate daily operations in a way that assures the highest possible availability. Such a situation poses an engineering challenge, for which system complexity needs to be tamed in a way that everyday use becomes intuitive and efficient. The sensation of comfort and ease of use are matters of ergonomics and usability - very relevant not only to equipment but especially software applications, products and graphical user interfaces. The Controls Configuration Service (CCS) is a key component in CERN's data driven accelerator Control System. Based around a central database, the service provides a range of user interfaces enabling configuration of all different aspects of controls for CERN's accelerator complex. This paper describes the on-going renovation of the service with a focus on the evolution of the provided user interfaces, design choices and architectural decisions paving the way towards a single configuration platform for CERN's control systems in the near future.

Funding: This work was in part supported by the Horizon 2020 programme of the European Union (iNEXT grant, project No. 653706)
Originally conceived at the ESRF and first deployed in 2005, MXCuBE (Macromolecular Xtallography Customized Beamline Environment) has, with its successor MXCuBE2, become a successful international collaboration. The aim of the collaboration is to develop a beamline control application for macromolecular crystallography (MX) that is independent of the underlying instrument control software and thus deployable at the MX beamlines of any synchrotron source. The continued evolution of the functionality offered at MX beamlines is to a large extent facilitated by active software development. New demands and advances in technology have led to the development of a new version, MXCuBE3, the design of which was inspired by the results of a technical pre-study and a user survey. MXCuBE3 takes advantage of recent developments in web technologies such as React and Redux to create an intuitive and user-friendly application. Access to the application from any web browser further simplifies operation and natively facilitates the execution of remote experiments.

Scientific data management is a key aspect of the IT systems of a user research facility like the MAX IV Laboratory. By definition, such a system handles the data produced by the experimental users of the facility. It could be perceived as being as simple as storing the experimental data on an external hard drive to carry back to the home institute for analysis. On the other hand, the "data" can be seen as more than just a file in a directory, and the "management" as more than a copy operation. Simplicity and a good user experience versus security, authentication and reliability are among the main challenges of this project, along with the required changes in mindset. This article explains the underlying concepts and the initial roll-out of the system at the MAX IV Laboratory for the first users, as well as the features anticipated in the future.

The SwissFEL timing system builds on MRF's event system products. Performance and functional requirements have pushed the MRF timing components to their newest generation (300 series), providing active delay compensation, conditional sequence events, and topology identification, among other features. However, employing the available hardware functionalities to implement complex and varying operational demands, and exposing them in the control system, has its own challenges. After a brief introduction to the new MRF hardware, this paper describes operational aspects of the SwissFEL timing system and related control system applications. We describe a new technique for beam rate control and how this scheme is used by the machine protection system (MPS). We show how a well-thought-out, modular software design enables us to maintain various repetition rates across the facility and to implement complex triggering patterns with minimum development effort. We also discuss our timestamping method and its interface to the beam-synchronous data acquisition system. Further, we share our experience with timing network installation, monitoring and maintenance issues during the commissioning phase of the facility.

Synchrotron light sources are required to operate on 24/7 schedules, while at the same time being continuously upgraded to meet scientists' needs for improved efficiency and performance. These operating conditions impose rigid calendars on control system engineers, reducing the available maintenance and testing time to a few hours per month. The SimulatorDS project has been developed to cope with these restrictions and enable test-driven development, replicating in a virtual environment the conditions in which a piece of software has to be developed or debugged. This software provides devices and scripts to easily duplicate or prototype the structure and behavior of any Tango control system, using the Fandango Python library* to export the control system status and create simulated devices dynamically. This paper also presents the first large-scale tests using multiple SimulatorDS instances running on a commercial cloud.
* S. Rubio et al., "Dynamic Attributes and other functional flexibilities of PyTango", ICALEPCS'09, Kobe, Japan (2009)
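
The dynamic-attribute idea behind such simulators can be sketched without any Tango dependency: each simulated attribute is declared as a formula string and evaluated on every read against a controlled namespace. The attribute names and formulas below are invented for illustration; the real SimulatorDS builds actual Tango devices via PyTango and Fandango.

```python
# Minimal dynamic-attribute simulator: attribute behavior is data
# (formula strings), so a device's structure can be duplicated or
# prototyped without writing per-device code.
import math

class SimulatedDevice:
    def __init__(self, formulas):
        self.formulas = dict(formulas)   # attribute name -> formula string

    def read(self, attr, t):
        # Expose only a whitelisted namespace to the formula.
        ns = {"t": t, "sin": math.sin, "pi": math.pi, "abs": abs}
        return eval(self.formulas[attr], {"__builtins__": {}}, ns)

dev = SimulatedDevice({
    "current": "200.0 - 0.01 * t",             # slow decay, arbitrary units
    "ripple":  "abs(sin(2 * pi * t / 60.0))",  # 60 s period
})
i0 = dev.read("current", 0.0)
```

Because behavior is declared rather than coded, an exported snapshot of a live control system can be turned into a set of such devices automatically.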

The MIS section in the Computing division at the ALBA Synchrotron designs and supports management information systems. This paper describes the streamlining of the work of 12 support groups into a single customer portal and issue management system. Prior to the change, ALBA was using five different ticket systems. To improve coordination, we searched for tools able to support ITIL Service Management as well as PRINCE2 and Agile project management. Among the market solutions, JIRA, with its agile boards, calendars, SLAs and service desks, was the only one offering a seamless integration of both. Support teams took the opportunity to redesign their service portfolio and management processes. Through UX design, JIRA has proved to be a flexible solution for customizing forms, workflows, permissions and notifications on the fly, creating a virtuous cycle of rapid improvements and a rewarding co-design experience that results in highly fitting solutions and fast adoption. Team, project and service managers now use a single system to track requests in a timely manner, view trends, and get a consolidated view of the efforts invested in the different beamlines and accelerators.

The BioMAX beamline at MAX IV is devoted to macromolecular crystallography and, thanks to its high-end instrumentation and comprehensive software environment, will achieve a high level of experimental automation once its full potential is reached. The control system is based on Tango and Sardana for managing the main elements of the beamline. Data acquisition and experiment control are done through MXCuBE v3, which interfaces with the control layer. Currently, the most critical elements, such as the detector and diffractometer, are already integrated into the control system, whereas the integration of the sample changer has already started. BioMAX has received its first users, who successfully collected diffraction data and provided feedback on the general performance of the control system and its usability. The present work describes the main features of the control system and its operation, as well as the upcoming instrument integration plans.

Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-728701)
Established control systems for scientific experimental facilities offer several levels of user interfaces to match the domain-specific needs and preferences of experimentalists, operational and engineering staff. At the National Ignition Facility, low-level device panels address technicians' need for comprehensive hardware control, while the Shot Automation software allows the NIF Shot Director to advance thousands of devices at once through a carefully orchestrated shot sequence. MATLAB scripting with the NIF Layering Toolbox has enabled the formation of intricate deuterium-tritium ice layers for fusion experiments. The latest addition to this family of user interfaces is the Target Area Alignment Tool (TAAT), which guides NIF operators through the hundreds of measurement and motion steps necessary to precisely align targets and diagnostics for each experiment inside NIF's 10-meter target chamber. In this paper, we discuss how this new tool integrates familiar spreadsheet calculations with intuitive visual aids and checklist-like scripting to allow NIF process engineers to automate and streamline alignment sequences, contributing towards NIF's shot-rate enhancement goals.

The superconducting magnet test facility (SM18) at CERN has been using the Automatic Quench Analysis (AQA) software to analyse quench data during the Large Hadron Collider (LHC) magnet test campaign. This application was developed in LabVIEW in the early 2000s by the Measurement Test and Analysis (MTA) section at CERN. Over the last few years, SM18 has been upgraded for the High Luminosity LHC (HL-LHC) magnet prototypes, which demand high flexibility from the software. The new requirements were that the analysis algorithms should be open, allowing contributions from engineers and physicists with basic programming knowledge, that a large number of tests should be executed automatically with reports generated, and that the software should be maintainable by the MTA team. The paper presents the description, present status and future evolution of the new AQA software that replaces the LabVIEW application.

Funding: This work is supported by the Italian Ministry of Education, University, and Research (MIUR) with funds specifically assigned to the Italian National Institute of Astrophysics (INAF)
The Cherenkov Telescope Array (CTA) project is an international initiative to build a next-generation ground-based observatory for very-high-energy gamma rays. Three classes of telescopes with different mirror sizes will be located in the northern and southern hemispheres. The ASTRI mini-array of CTA preproduction is one of the small-sized telescope mini-arrays proposed for installation at the CTA southern site. It will consist of nine units based on the end-to-end ASTRI SST-2M prototype already installed on Mt. Etna (Italy). The mini-array software system (MASS) supports both the end-to-end ASTRI SST-2M prototype and mini-array operations. The ASTRI software integration team has defined procedures to perform the integration, test and release activities effectively. Developers are required to use the repository tree and branches according to the development status, to include specific sections for automated tests, and to ensure the software is well tested (in simulated and real systems) before any release. Here we present the method adopted to release the first MASS version supporting the ASTRI SST-2M prototype test and operation activities.

The development of process control systems for the cryogenic infrastructure at CERN is based on an automatic software generation approach. The overall complexity of the systems, their frequent evolution as well as the extensive use of databases, repositories, commercial engineering software and CERN frameworks led to further efforts towards improving the existing automation based software production methodology. A large number of control system upgrades were successfully performed for the Cryogenics in the LHC accelerator, applying the Continuous Integration practice integrating all software production tasks, tools and technologies. The production and maintenance of the control software for multiple cryogenic applications became more reliable while significantly reducing the required time and effort. This concept became a guideline for development of process control software for new cryogenic systems at CERN. This publication presents the software production methodology, as well as the summary of several years of experience with the enhanced automated control software production, already implemented for the Cryogenics of the LHC accelerator and the CERN cryogenic test facilities.

The Daniel K. Inouye Solar Telescope (DKIST) is currently under construction in Hawaii. The telescope control system comprises a significant number of subsystems to coordinate the operation of the telescope and its instruments. Integrating delivered subsystems into the control framework and managing existing subsystem versions requires careful management, including processes that provide confidence in the current operational state of the whole control system. Continuous software Quality Assurance provides test metrics on these systems using a Testing Automation Framework (TAF), which provides system and assembly test capabilities to ensure that software and control requirements are met. This paper discusses the requirements for a Quality Assurance program and the implementation of the TAF to execute it.

For the new FAIR accelerator complex at GSI, the LSA settings management system is used. It is developed in collaboration with CERN and, until now, has executed strictly serially. Nowadays the performance gains of single-core processors have nearly stagnated and multicore processors dominate the market. This evolution forces software projects to make use of parallel hardware to increase their performance. In this thesis, LSA is analyzed and parallelized using different parallelization patterns such as task and loop parallelization. The most common case of user interaction is to change specific settings so that the accelerator performs at its best. For each changed setting, LSA needs to calculate all child settings in the parameter hierarchy. To maximize the speedup, the calculations are also optimized sequentially: the data structures and algorithms used are reviewed to ensure minimal resource usage and maximal compatibility with parallel execution. The overall goal of this thesis is to speed up the calculations so that the results can be shown in a user interface with nearly no noticeable latency.
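
The loop-parallelization pattern described above can be sketched as a level-by-level fan-out over the parameter hierarchy: once a setting is recalculated, all of its children are independent of each other and can be processed concurrently. The hierarchy and the "calculation" below are toy placeholders, not LSA code.

```python
# Parallel recalculation of child settings, level by level: settings on
# the same level have no dependencies among themselves, so each level is
# one parallel wave.
from concurrent.futures import ThreadPoolExecutor

hierarchy = {                      # parent -> children (toy hierarchy)
    "energy": ["dipole_field", "rf_frequency"],
    "dipole_field": ["dipole_current"],
    "rf_frequency": [],
    "dipole_current": [],
}

def recalc(setting, results):
    # Placeholder for a real physics-to-hardware conversion rule.
    results[setting] = "calculated"
    return hierarchy[setting]

def propagate(root):
    results, level = {}, [root]
    with ThreadPoolExecutor() as pool:
        while level:
            # Fan out over the current level, then gather the children
            # of all finished settings to form the next wave.
            child_lists = pool.map(lambda s: recalc(s, results), level)
            level = [c for children in child_lists for c in children]
    return results

done = propagate("energy")
```

A real implementation must additionally deduplicate shared descendants in the acyclic hierarchy so that no setting is calculated before all of its parents are done.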

SKA (Square Kilometre Array) is a project aimed at building a very large radio telescope, composed of thousands of antennas and related support systems. The overall orchestration is performed by the Telescope Manager (TM), a suite of software applications. To ensure the proper and uninterrupted operation of TM, a local monitoring and control system, called TM Services, is being developed. Fault Management (FM) is one of these services and comprises the processes and infrastructure associated with detecting, diagnosing and fixing faults, and finally returning to normal operations. The aim of this study is to introduce artificial intelligence algorithms into the detection phase and build a predictive model, based on the history and statistics of the system, in order to perform trend analysis and failure prediction. Based on monitoring data and health status detected by the software system monitor, and on log files gathered by the ELK (Elasticsearch, Logstash, and Kibana) server, the predictive model checks that the system is operating within its normal operating parameters and takes corrective actions in case of failure.

The Bunch Arrival time Monitor (BAM) is a precise beam diagnostic instrument for assessing accelerator stability on-line. It is one of the most important components of the SwissFEL facility at the Paul Scherrer Institute (PSI). The overall complexity of the monitor demands the development of an extremely reliable control system that handles the basic BAM operations. A prototype of such a system was created at PSI. The system is very flexible: it provides a set of tools that allow one to implement a number of advanced control features, such as tagging experimental data with the SwissFEL machine pulse number or embedding high-level control applications into the process controllers (IOCs). The paper presents the structure of the BAM control setup and discusses the operational experience gained with it.

This paper describes the version control of Oracle databases across different environments. Its basis is the collaboration between the GSI Helmholtz Centre for Heavy Ion Research (GSI) and the European Organization for Nuclear Research (CERN). The goal is to provide a sufficient and practical concept to improve database synchronization and version control for a specific database landscape shared by the two research facilities. First, the relevant requirements of both facilities were identified and compared, leading to a shared catalog of requirements. In the process, database tools such as Liquibase and Flyway were integrated as prototypes into the Oracle system landscape. During the implementation of the prototypes, several issues were identified that arise from the established situation of two collaborating departments at the research facilities. The prototype was required to be flexible enough to adapt to the given conditions of the database landscape. The creation of a flexible and adjustable system enables the two research facilities to use, synchronize and update the shared database landscape.
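
The core mechanism behind tools like Flyway and Liquibase can be sketched in a few lines: versioned migration scripts are applied in order, and a history table records which versions a given database instance has already received, making synchronization between environments idempotent. SQLite and the toy DDL below stand in for the Oracle landscape.

```python
# Flyway-style schema versioning sketch: apply pending migrations in
# version order and record each one in a schema-history table.
import sqlite3

MIGRATIONS = {   # version -> DDL (illustrative scripts, not real schemas)
    1: "CREATE TABLE device (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE device ADD COLUMN location TEXT",
}

def migrate(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_history (version INTEGER PRIMARY KEY)")
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_history")}
    for version in sorted(MIGRATIONS):
        if version not in applied:            # apply only pending versions
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_history VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)    # second run is a no-op: the instance is already current
```

Because every environment carries its own history table, two collaborating sites can converge on the same schema simply by running the same ordered script set.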

ALICE Data Point Service (ADAPOS) is a software architecture being developed for the Run 3 period of the LHC, as part of the effort to transmit conditions data from the ALICE Detector Control System (DCS) to the GRID for distributed processing. ADAPOS uses Distributed Information Management (DIM), 0MQ, and the ALICE Data Point Processing Framework (ADAPRO). DIM and 0MQ are multi-purpose application-level network protocols. DIM and ADAPRO are developed and maintained at CERN. ADAPRO is a multi-threaded application framework supporting remote control as well as real-time features such as thread affinities, records aligned with cache-line boundaries, and memory locking. ADAPOS and ADAPRO are written in C++14 using OSS tools, Pthreads, and the Linux API. The key processes of ADAPOS, Engine and Terminal, run on separate machines, facing different networks. Devices connected to the DCS publish their state as DIM services. Engine receives updates to the services and converts them into a binary stream. Terminal receives the stream over 0MQ and maintains an image of the DCS state. At regular intervals, it sends copies of the image over another 0MQ connection to a readout process of the ALICE Data Acquisition system.
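
The Engine/Terminal data flow can be sketched independently of DIM and 0MQ: data-point updates are serialized into a fixed-layout binary stream on one side and folded into an in-memory image of the state on the other. The record layout below is invented for illustration; the real ADAPOS format differs.

```python
# Binary-stream sketch of the ADAPOS-style flow: encode (id, value)
# updates into a frame, then fold a received frame into a state image
# where the latest update for each data point wins.
import struct

RECORD = struct.Struct("<Id")        # data-point id (uint32) + value (double)

def encode(updates):
    """Serialize an iterable of (id, value) updates into one frame."""
    return b"".join(RECORD.pack(dp, val) for dp, val in updates)

def apply_frame(image, frame):
    """Fold a received frame into the state image, last write wins."""
    for offset in range(0, len(frame), RECORD.size):
        dp, val = RECORD.unpack_from(frame, offset)
        image[dp] = val
    return image

image = {}
apply_frame(image, encode([(7, 4.2), (9, 300.0), (7, 4.3)]))
```

Shipping periodic full copies of such an image, rather than individual updates, is what lets the downstream readout stay simple and stateless.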

At Solaris (National Synchrotron Radiation Center, Kraków) we have deployed test VDI software to virtualize the physical desktops in the control room in order to ensure stability, more efficient support, system updates and restores. The test aimed to accelerate the installation of new workplaces for single users. The Horizon software gives us the opportunity to create roles and access permissions. VDI software has contributed to efficient management and lower maintenance costs for virtual machines compared to physical hosts. We are still testing VMware Horizon 7 at Solaris.

This paper presents the Automatic RElease Service (ARES) developed by the Industrial Controls and Safety Systems group at CERN. ARES provides tools and techniques to fully automate the software release procedure. The service replaces release mechanisms that in some cases were cumbersome and error-prone with an automated procedure in which software release and publication are completed with a few mouse clicks. ARES reduces the time and work developers must spend to carry out a new release, enabling more frequent releases and therefore a quicker reaction to user requests. The service uses standard technologies (Jenkins, Nexus, Maven, Drupal, MongoDB) to check out, build, package and deploy software components to different repositories (Nexus, EDMS), as well as to publish them on Drupal web sites.

The Square Kilometre Array (SKA) will be the world's largest and most sensitive radio observatory ever built. SKA is currently completing the pre-construction phase before initiating the mass-construction phase 1, in which two arrays of radio antennas - SKA1-Mid and SKA1-Low - will be installed in South Africa's Karoo region and Western Australia's Murchison Shire, each covering a different range of radio frequencies. The SKA1-Mid array comprises 130 dish antennas of 15 m diameter observing in the 350 MHz-14 GHz range and will be remotely orchestrated by the SKA Telescope Manager (TM) system. To enable on-site and remote operations, each dish will be equipped with a Local Monitoring and Control (LMC) system responsible for directly managing and coordinating the antenna instrumentation and subsystems, providing a rolled-up monitoring view and high-level control to TM. This paper gives a status update of the antenna instrumentation and control software design and provides details on the LMC software prototype being developed.

The high-brilliance Gamma Beam System (GBS) at ELI-NP will deliver quasi-monochromatic gamma beams with a high spectral density (10,000 photons/s/eV) and a high degree of linear polarization (>95%). The Gamma Beam Delivery and Diagnostics (GBDD) of ELI-NP is implemented to deliver the gamma beams to the experimental setups and to monitor the characteristics of the beams. An EPICS control system is being developed for the GBDD to support two main categories of equipment: i) equipment for the delivery of the gamma beam, including vacuum systems, collimators, alignment platforms, and movable beam dumps; ii) devices to be used during the operation of the GBS for diagnostics and monitoring, including digitizers, power supplies, detectors, and a profile system. High-level applications for the gamma beam diagnostics are under development to complement the real-time measurements and monitoring, including energy spread measurement, flux and polarization measurement, a spatial profile monitor and a time structure monitor. This paper describes all aspects of the EPICS control system for the ELI-NP GBDD, including the hardware integration, network architecture, and high-level applications.

The Australian Square Kilometre Array Pathfinder (ASKAP) is a radio telescope array in Western Australia. A third of the 36 telescopes forming the array have been fully commissioned and are in use under the early science program. The construction phase for the rest of the array is now complete and commissioning is continuing. This report follows on from the last status update and addresses new challenges as the telescope moves into the operational phase. The architecture of the system has proven robust; however, some of the third-party software choices have been reviewed as new software packages have appeared in the years since the initial adoption. We present the reasoning behind replacing some of our processes and software packages to ensure long-term operation of the instrument.

Particle accelerators are complex machines with fast, high-power load peaks, and power quality is critical for their correct operation. External and internal disturbances can have significant repercussions, causing beam losses or severe perturbations. Mastering the load and understanding how disturbances propagate across the network is a crucial step for developing the grid model and understanding the limits of the existing installations. Although several off-the-shelf solutions for real-time data acquisition are available, an in-house FPGA-based solution was developed to create a distributed measurement system. The system can measure power and power quality on demand as well as acquire raw current and voltage data on a defined trigger, similar to a distributed oscilloscope. In addition, the system can record many digital signals from the high-voltage switchgear, enabling electrical perturbations to be easily correlated with the state of the network. The result is a scalable system with fully customizable software, written specifically for this purpose. The system prototype has been in service for two years and full-scale deployment is currently ongoing.

Linear accelerator technology has been widely applied to radiotherapy machines, and demand for such machines in Thailand has been increasing in recent years. To improve the availability of low-cost machines for domestic use, a prototype 6 MeV medical linear accelerator is under development at the Synchrotron Light Research Institute (SLRI) in Nakhon Ratchasima, Thailand. For beam shaping, a so-called secondary collimator with adjustable arrangements of the collimator jaws is utilized. Collimator motion control is one of the machine subsystems necessary for producing the desired field size of the beam. In this paper, the FPGA-based motion control system of the machine prototype is presented. The programmable logic part of the hardware is designed in VHDL for digital processing, while the main motion control algorithm is implemented on the processor of the ZedBoard FPGA platform. Communication between the motion control subsystem and the main control system software of the machine is also described.

The Muon-to-Central Trigger Processor Interface (MUCTPI) of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN will be upgraded to an ATCA blade system for Run 3. The new design requires the development of new communication models for control, configuration and monitoring. A System-on-Chip (SoC) with a programmable logic part and a processor part will be used for communication with the run control system and with the MUCTPI processing FPGAs. Different approaches have been compared. First, we tried an available UDP-based implementation in firmware for the programmable logic. Although this approach works as expected, it does not provide any flexibility to extend the functionality to more complex operations, e.g. for serial protocols. Second, we used the SoC processor with an embedded Linux operating system and application-specific software written in C++ using a TCP remote-procedure-call approach. The software is built and maintained using the Yocto/OpenEmbedded framework. This approach was successfully used to test and validate the MUCTPI prototype. A third approach under investigation is the option of porting the ATLAS run control software directly to the embedded Linux.

With high intensity beams, a precise measurement and effective correction of the betatron coupling is essential for the performance of the Large Hadron Collider (LHC). In order to measure this parameter, the LHC transverse damper (ADT), used as an AC dipole, will provide the necessary beam excitation. The beam oscillations are recorded by the Beam Position Monitors (BPMs) and transmitted to dedicated analysis software. We set up the project with a three-layer software architecture: the central node is a Java server orchestrating the different actors - the Graphical User Interface, the control and triggering of the ADT AC dipole, the BPMs, the oscillation analysis (partly in Python), and finally the transmission of the correction values. The whole system is currently being developed by a team using Scrum, an iterative and incremental agile software development framework. In this paper we present an overview of this system, experience from machine development and commissioning, and how Scrum helped us achieve our goals. Improvement and re-use of the architecture, with a clean decoupling between data acquisition and data analysis, are also briefly discussed.

This paper describes a new software tool recently developed at CERN, the New CPS Beam Optimiser. This application allows the automatic optimization of beam properties using a statistical method that has been modified to suit the purpose. Tuning beams is laborious and time-consuming; therefore, to gain operational efficiency, this new method performing an intelligent automatic scan sequence has been implemented. The application, written in JavaFX, uses CERN control group standard libraries and is deliberately simple. The GUI is user-friendly and allows operators to configure different optimization processes in a dynamic and easy way. Different measurements, complemented by simulations, have been performed to understand the response of the algorithm. These results are presented here, along with the modifications still needed in the original mathematical libraries.

Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy. EPICS is a widely used software infrastructure for controlling particle accelerators; its Channel Access (CA) network protocol for communication with Input/Output Controllers (IOCs) is easy to implement in hardware, and many vendors provide CA support for their devices. The RHIC Control System provides control of more than 400,000 parameters through the Accelerator Data Objects (ADO) software abstraction layer. In this paper we present a software bridge that allows cross-communication between ADO and EPICS devices. It consists of two separate programs: an ADO manager, which hosts the ADO parameters and executes a caput() request to modify the corresponding EPICS PV when a parameter is changed; and an epics2ado program, which monitors the EPICS PVs and notifies the ADO manager. This approach has been used to integrate the NSLS-II PSC hardware interface into the RHIC Control System.
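The two-way mirroring described above can be sketched in a few lines. This is a hypothetical illustration of the bridging logic only: the class and parameter names are invented, and the EPICS side is stubbed with a dictionary, whereas a real deployment would wire `epics_put` to pyepics' `caput()` and drive `on_pv_update` from a `camonitor` subscription.

```python
# Hypothetical sketch of the ADO<->EPICS bridging logic. The EPICS side is
# stubbed with a dict so the data flow can be followed without an IOC.

class AdoEpicsBridge:
    def __init__(self, mapping, epics_put):
        self.mapping = mapping          # ADO parameter name -> EPICS PV name
        self.reverse = {pv: p for p, pv in mapping.items()}
        self.epics_put = epics_put      # would be pyepics caput() in production
        self.ado_values = {}

    def set_ado_parameter(self, name, value):
        """ADO manager side: a parameter change triggers a caput()."""
        self.ado_values[name] = value
        self.epics_put(self.mapping[name], value)

    def on_pv_update(self, pv, value):
        """epics2ado side: a monitored PV update notifies the ADO manager."""
        self.ado_values[self.reverse[pv]] = value

# Stub "IOC" standing in for the EPICS side; PV/parameter names are invented.
ioc = {}
bridge = AdoEpicsBridge({"psc.current": "PSC:CUR:SP"}, ioc.__setitem__)

bridge.set_ado_parameter("psc.current", 12.5)   # ADO -> EPICS direction
bridge.on_pv_update("PSC:CUR:SP", 13.0)         # EPICS -> ADO direction
```

The split into two cooperating programs (ADO manager plus epics2ado monitor) maps onto the two methods shown here.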

Funding: This work is supported by the National Natural Science Foundation of China (61333003) and the Science and Technology Development Foundation of the China Academy of Engineering Physics (14-FZJJ-0422). Rapidly changing demands for interoperability among heterogeneous systems lead to a paradigm shift from pre-defined control strategies to dynamic customization within many automation systems, e.g., large-scale scientific facilities. However, today's mass automation systems are of a very static nature: fully changing the control process requires a large amount of expensive manual effort and is quite error prone. Hence, flexibility will become a key factor in future control systems. The adoption of web services and Service-Oriented Architecture (SOA) can provide the requested flexibility. Since the adaptation of SOAs to automation systems has to meet time-constrained requirements, particular attention should be paid to real-time web services for deterministic behaviour. This paper proposes a novel framework for the integration of a Time-Constrained SOA (TcSOA) into mass automation systems. Our design enables service encapsulation at field level and evaluates how real-time technologies can be synthesized with web services to enable deterministic performance.

A C/C++ software improvement process (SIP4C/C++) has been increasingly applied by the CERN accelerator Controls group since 2011, addressing technical and cultural aspects of our software development work. A first paper was presented at ICALEPCS 2013*. On the technical side, a number of off-the-shelf software products have been deployed and integrated, including Atlassian Crucible (code review), Google Test (unit testing), Valgrind (memory profiling) and SonarQube (static code analysis). Likewise, certain in-house developments are now operational, such as a Generic Makefile (compile/link/deploy), CMX (for publishing runtime process metrics) and Manifest (capturing library dependencies). SIP4C/C++ has influenced our culture by promoting integration of said products into our binaries and workflows. We describe the current status of the technical solutions and how they have been integrated into our environment. Based on testimony from four project teams, we present reasons for and against adoption of individual SIP4C/C++ products and processes. Finally, we show how SIP4C/C++ has improved development and delivery processes as well as the first-line support of delivered products.*http://jacow.org/ICALEPCS2013/papers/moppc087.pdf, http://jacow.org/ICALEPCS2013/posters/moppc087_poster.pdf

A large part of the CERN Accelerator Control System is written in Java by around 180 developers (software engineers, operators, physicists and hardware specialists). The codebase contains more than 10 million lines of code, which are packaged as 1000+ JARs and deployed as 600+ different client/server applications. All this software is produced using CommonBuild Next Generation (CBNG), an enterprise build tool implemented on top of industry standards, which simplifies and standardizes the way our applications are built. CBNG not only includes general build tool features (such as dependency management, code compilation, test execution and artifact uploading), but also provides traceability throughout the software life cycle and makes releases ready for deployment. The interface is kept as simple as possible: users declare the dependencies and the deployment units of their projects in one file. This article describes the build process, as well as the design goals, the features, and the technology behind CBNG.

The KSTAR plasma control system has a powerful monolithic software architecture with a dedicated, centralized design. However, due to the increasing real-time functionality of distributed local control systems, a flexible high-performance software framework is needed. The new real-time core engine inherits its design philosophy from the Very Large Telescope (VLT) control software. The new Tool for Advanced Control (TAC) engine is written in standard C++ and runs on Linux. It is a multithreaded core engine for the execution of real-time applications, in which elemental building blocks are chained together to form a control application. References: "Design and implementation of a standard framework for KSTAR control system", FED, Vol. 89, 2015; "Designing a common real-time controller for VLT applications", Proc. of SPIE Vol. 5496.

In developing the control system for the FAIR accelerator complex we encountered strict latency and throughput constraints on the timely supply of data to devices controlling ramped magnets. In addition, the timing hardware that interfaces to the White Rabbit timing network may be shared by multiple processes on a single front-end computer. This paper describes the interprocess communication and resource-sharing system, and the consequences of using the D-Bus message bus. Our experience of improving latency and throughput to meet the real-time requirements of the control system is then discussed. Work is also presented on prioritisation techniques that allow time-critical services to share the bus with other components.

The linear induction accelerator LIA-20 for radiography is a pulsed machine designed to provide three consecutive electron bunches. Since every pulse is a distinct experiment, it is highly important to keep the facility state and the experimental data coherent. This paper presents the overall software architecture. Challenges and particular approaches to the design of a pulsed-machine control system using Tango are discussed.

I. Arredondo, J. Jugo
University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain

Nowadays, modern accelerators are starting to use virtualization to implement their control systems. Following this idea, one possibility is to use containers. Containers are highly scalable, easy to produce and reproduce, easy to share, resilient, elastic and cheap in terms of computational resources - all characteristics that fit the needs of a well defined and versatile control system. In this paper, a control structure based on this paradigm is discussed. First, the technologies available for the task are briefly compared, starting with containerization tools and moving on to container orchestration technologies; as a result, Kubernetes and Docker are selected. Then, the basics of Kubernetes/Docker and how they fit the control of an accelerator are presented. Next, the control applications suitable for containerization are analyzed, including electronic log systems, archiving engines, middleware servers, etc. Finally, a particular structure for an accelerator based on EPICS as middleware is sketched.
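To make the orchestration idea concrete, the sketch below builds a Kubernetes Deployment manifest for a containerized EPICS soft IOC as a plain Python dictionary. This is an illustration under assumptions: the image name, labels and registry are invented, and a real setup would apply such a manifest via kubectl or a Kubernetes API client rather than constructing it by hand.

```python
# Illustrative sketch: a Kubernetes Deployment manifest for a hypothetical
# containerized EPICS IOC, built as a Python dict. Image and names invented.

def ioc_deployment(name, image, replicas=1, ca_port=5064):
    """Return a Deployment manifest dict for one containerized IOC."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    # 5064 is the default EPICS Channel Access server port
                    "ports": [{"containerPort": ca_port}],
                }]},
            },
        },
    }

manifest = ioc_deployment("vacuum-ioc", "registry.example.org/epics-ioc:7.0")
```

Declaring each IOC, archiver or middleware server this way is what gives the container approach its reproducibility: the whole control structure becomes versionable text.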

This paper describes the design and development of innovative management software for the accelerator beamlines at INFN-LNS. The Graphical User Interface, the data exchange protocol, the software functionality and the hardware are illustrated. Compared to traditional platforms for accelerator consoles, at INFN-LNS we have developed a new concept of control system and data acquisition framework, based on a data structure server that so far has never been used for supervisory control. We chose Redis as a highly scalable data store, shared by multiple and different processes. With such a system it is possible to communicate cross-platform, cross-server or cross-application in a very simple way, using very lightweight libraries. A highly ergonomic Graphical User Interface allows all parameters to be controlled with a user-friendly interactive approach, ensuring high functionality so that the beam operator can visually work in a realistic environment. All the information related to the beamline elements involved in beam transport can be stored in a centralized database, with suitable criteria for maintaining a historical record.
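One way a Redis data-structure server can serve such a framework is sketched below: current values in a hash per beamline element, history in a sorted set scored by timestamp, and a publish channel to notify GUIs. The key layout and element names are hypothetical, not the actual INFN-LNS schema; the function returns command tuples rather than talking to a live server, but with redis-py each tuple would map directly onto `client.execute_command(*cmd)`.

```python
# Hypothetical Redis data layout for beamline elements, in the spirit of the
# data-structure-server approach described above. No live server is needed:
# the function only composes the commands, making the schema explicit.

import json
import time

def set_element(element, params, ts=None):
    """Return the Redis commands that store one beamline element update."""
    ts = ts if ts is not None else time.time()
    # Current values: one hash per element, flattened to field/value pairs.
    cmds = [("HSET", f"beamline:{element}") + sum(
        ((k, str(v)) for k, v in params.items()), ())]
    # Historical record: sorted set, score = timestamp, member = snapshot.
    cmds.append(("ZADD", f"history:{element}", str(ts), json.dumps(params)))
    # Notify any GUI subscribed to this element's update channel.
    cmds.append(("PUBLISH", f"updates:{element}", json.dumps(params)))
    return cmds

cmds = set_element("dipole1", {"current_A": 120.4, "status": "ON"}, ts=1000.0)
```

The pub/sub channel is what makes the cross-application communication lightweight: a console written in any language only needs a Redis client to follow the machine state.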

Developing operational User Interfaces (UI) can be challenging, especially during machine upgrades or commissioning, where many changes can suddenly be required. An agile Integrated Development Environment (IDE) with enhanced refactoring capabilities can ease the development process. Inspector is an intuitive UI-oriented IDE allowing for development of control interfaces and data processing. It features a state-of-the-art visual interface composer fitted with an ample set of graphical components offering rich customization. It also integrates a scripting environment for soft real-time data processing and UI scripting for complex interfaces. Furthermore, Inspector supports many data sources. Together with the short application development time, this means Inspector can be used in early stages of device engineering, or on top of a full control system stack to create elaborate high-level control UIs. Inspector is now a mission-critical tool at CERN providing agile features for creating and maintaining control system interfaces. It is intensively used by experts and machine operators and performs seamlessly from small test benches to complex instruments such as LHC or LINAC4.

JavaFX, the GUI toolkit included in the standard JDK, provides charting components with commonly used chart types, a simple API and wide customization possibilities via CSS. Nevertheless, while the offered functionality is easy to use and of high quality, it lacks a number of features that are crucial for scientific or controls GUIs. Examples are the possibility to zoom and pan the chart content, superposition of different plot types, data annotations, decorations or a logarithmic axis. The standard charts also show performance limitations when exposed to large data sets or high update rates. The article describes how we implemented the missing features and overcame the performance problems.

ESPRESSO, the Echelle SPectrograph for Rocky Exoplanet and Stable Spectroscopic Observations, is undergoing the final testing phases before being shipped to Chile and installed in the Combined Coudé Laboratory (CCL) at the ESO Very Large Telescope site. The integration of the instrument takes place at the Astronomical Observatory of Geneva. It includes the full tests of the Instrument Control Electronics (ICE) and Instrument Control Software (ICS), designed and developed at the INAF-Astronomical Observatory of Trieste. ESPRESSO is the first permanent ESO-VLT instrument whose electronics are based on Beckhoff PLCs. Two PLC CPUs share the workload of the ESPRESSO functions, and the PLCs communicate via the OPC UA protocol with the instrument control software based on the VLT control software package. In this phase all the devices and subsystems of ESPRESSO are installed, connected together and verified, mimicking the final working conditions in Chile. This paper summarizes the features of the ESPRESSO control system, the tests performed during the integration in Europe and the main performance obtained before the integration of the whole instrument "on sky" in South America.

Funding: China Spallation Neutron Source and the science and technology project of Guangdong province under grants No. 2016B090918131 and 2017B090901007. In this paper we introduce the design and implementation of the neutron instrument experiment control system at CSNS. The task of the control system is to run the spectrometer experiments while providing experimental data for physics analysis. The instrument control system at CSNS coordinates device control, data acquisition and analysis software, electronics, detectors, the sample environment and many other subsystems. This paper describes the system architecture, timing system, device control and software of instrument control at CSNS. Corresponding author: Jian ZHUANG, e-mail: zhuangj@ihep.ac.cn

The Large Hadron Collider (LHC) is equipped with a complex collimation system to protect sensitive equipment from unavoidable beam losses. Collimators are positioned close to the beam using an alignment procedure. Until now they have always been aligned assuming no tilt between the collimator and the beam; however, tank misalignments or beam envelope angles at large-divergence locations could introduce a tilt that limits the collimation performance. This paper describes three different algorithms to automatically align a chosen collimator at various angles. The implementation was tested with and without beam at the SPS and the LHC. No human intervention was required, and the three algorithms converged to the same optimal tilt angle.

When the European Muon beamlines at the ISIS pulsed neutron and muon source [1] upgraded their front end magnets, it was desired that these new magnets should be controllable remotely. This work was undertaken by the team responsible for instrument control, who are in the process of a phased upgrade of instrument control software from a locally developed system (SECI) to an EPICS [2] based one (IBEX [3,4]). To increase the complexity of the task, parts of the front end needed to be controlled only by an individual instrument beamline, whilst some values needed to be tuned to the best compromise available for all three beamlines. Furthermore, the muon instruments were not ready for an upgrade to a full IBEX system at that time. By combining SECI, IBEX and the Mantid [5] data reduction package the required control and tuning has been achieved. This paper will give details of the challenges, the topology of the solution, how the current mixed system is performing, and what will be changed when the muon instruments are converted to IBEX.

The SKA project is an international effort (10 member and 10 associated countries, involving around 100 companies and research institutions) to build the world's largest radio telescope. The SKA Telescope Manager (TM) is the core package of the SKA Telescope, responsible for scheduling observations, controlling their execution, monitoring the telescope and so on. To do so, TM directly interfaces with the Local Monitoring and Control systems (LMCs) of the other SKA Elements (e.g. Dishes), exchanging commands and data with them using the TANGO controls framework. TM in turn needs to be monitored and controlled to ensure its continuous and proper operation. This responsibility, together with others such as collecting and displaying logging data to operators, performing lifecycle management of TM applications, directly handling TM faults where possible (which also includes handling TM status and performance data) and interfacing with the virtualization platform, makes up the TM Services (SER) package discussed and presented in this paper.

Developing and deploying software systems for data acquisition and experiment control in a beamline laboratory can be a very challenging task. In certain cases an existing system must be replaced and modernized in order to accommodate substantial beamline upgrades. DonkiOrchestra is a TANGO-based framework for data acquisition and experiment control developed at Elettra Sincrotrone Trieste. The framework is based on an advanced software trigger-driven paradigm developed in-house. DonkiOrchestra is meant to be general and flexible enough to be adapted to the development needs of different laboratories and their data acquisition requirements. This presentation outlines the upgrade of the LabVIEW-based TwinMic beamline control system, which hosts a unique soft X-ray transmission and emission microscope. Besides the technically demanding tasks of interfacing and controlling old and new instrumentation with DonkiOrchestra, this presentation discusses the various challenges of upgrading the software in a working synchrotron beamline.

Detectors currently being commissioned at Diamond Light Source (DLS) bring the need for more sophisticated control and data acquisition software. The Excalibur 1M and 3M are modular detectors composed of rows of identical stripes. The Odin framework mirrors this architecture by operating multiple file writers on different server nodes, managed by a central controller. The low-level control and communication is implemented in a vendor-supplied C library with a set of C-Python bindings, providing a fast and robust API to control the detector nodes, alongside a simple interface to interact with the file writer instances over ZeroMQ. The file writer is a C++ module that uses plugins to interpret the raw data and provide the format to write to file, allowing it to be used with other detectors such as Percival and Eiger. At DLS we implement an areaDetector driver to integrate Odin with the beamline EPICS control system. However, because Odin provides a simple HTTP REST API, it can be used by any site control system. This paper presents the architecture and design of the Odin framework and illustrates its usage as a controller of complex, modular detector systems.
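The claim that any site control system can drive Odin rests on the REST interface: a client only needs to compose HTTP requests. The sketch below builds such a request; note that the base URL, adapter name and parameter path are invented for illustration and do not reflect Odin's actual API, and the function returns the request parts instead of sending them so no server is required.

```python
# Hedged sketch of driving a REST-controlled detector framework over HTTP.
# Endpoint paths and parameter names here are hypothetical examples only.

import json

BASE = "http://detector-server:8888/api/0.1"   # invented base URL

def put_request(adapter, path, value):
    """Compose a PUT that would set one parameter on a control adapter."""
    url = f"{BASE}/{adapter}/{path}"
    # Convention assumed here: body keyed by the last path segment.
    body = json.dumps({path.rsplit("/", 1)[-1]: value})
    return ("PUT", url, body)

# e.g. instruct all file writers to start writing before an acquisition
method, url, body = put_request("fp", "config/hdf/write", True)
```

In a live system the returned triple would be handed to an HTTP client (e.g. `requests.put(url, data=body)`); the point of the sketch is that no vendor library or site-specific toolkit stands between the control system and the detector.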

Control systems for scientific instruments and experiments benefit from hardware and software platforms that provide flexible resources to fulfill various installation requirements. uSOP is a single-board computer, based on an ARM processor and the Linux operating system, that makes it easy to develop and deploy various control system frameworks (EPICS, Tango) supporting a variety of buses (I2C, SPI, UART, JTAG), ADCs, and general-purpose and specialized digital I/O. In this work we present a live demo of a uSOP board, showing a running IOC for a simple control task. We also describe the deployment of uSOP as a monitoring system architecture for the Belle II experiment, presently under construction at the KEK Laboratory (Tsukuba, Japan).

When feedback-loop latencies must be below the millisecond range, the performance of FPGA-based solutions is unrivaled. One of the main difficulties in these solutions is making a full-custom digital design compatible with a generic interface and with high-level control software. ALBA simplified the development process of electronic instrumentation with the use of the Harmony Bus (HB)*. Based on the Self-Describing Bus developed at CERN/GSI, it creates a bus framework where different modules share timestamped data and generate events. This solution enables the high-level control software, on a single-board computer or PC, to easily configure the expected functionality in the FPGA and manage the real-time data acquired. This framework has already been used in the new Em# electrometer**, produced within a collaboration between ALBA and MAX IV, and currently working in both synchrotrons. Future plans include extending the FPGA core library and high-level functions, and developing a new auto-generation tool able to dynamically create the FPGA configuration file, simplifying the development of new functionalities.* 'A Generic Fpga Based Solution for Flexible Feedback Systems', PCaPAC16, paper FRFMPLCO06 ** 'Em# Electrometer Comes To Light', ICALEPCS 2017 Abstract Submitted

The Laser MegaJoule (LMJ) is a 176-beam laser facility located at the CEA CESTA Laboratory near Bordeaux (France). It is designed to deliver about 1.4 MJ of energy to targets for high energy density physics experiments, including fusion experiments. The first 8-beam bundle was operated in October 2014 and a new bundle was commissioned in October 2016; the next two bundles are on the way. PARC* is the computational system used to automate the laser setup and the generation of the shot report with all the results acquired during the shot sequence (including alignment and synchronization). It has been designed to run sequences performing a setup computation or a full facility shot report in less than 15 minutes, for 1 or 176 beams. This contribution describes how this system solves this challenge and enhances the overall process.* PARC: French acronym for automatic bundle settings prediction.

Versatile Macromolecular Crystallography in situ (VMXi) is the first beamline at Diamond Light Source (DLS) to be entirely automated, with no direct user interaction to set up and control experiments. This marks a radical departure from other beamlines at the facility, and it has presented a significant design challenge to Generic Data Acquisition (GDA), the in-house software that manages beamline data collection. GDA has become a reactive controller for continual, uninterrupted processing of all user experiments. A major achievement has been to demonstrate that a suitable architectural implementation for automation can be delivered within a standard integrated development environment (IDE): there is no need for specialised software or a domain-specific language. The objective is to review the VMXi project, with emphasis on hardware configuration and experiment processing; describe the software and control architecture for automation; and provide a general set of guidelines for developing software for automation at a scientific facility.

A significant part of the experiments run at the Alba Synchrotron* involve scans. The continuous scans were first developed ad hoc; later the controls group devoted significant effort to standardizing them across the Alba instruments, enhancing the overall performance and allowing users to better exploit the beamtime**. Sardana***, the experiment control software used at Alba, aims among other features to provide a generic way of programming and executing continuous scans. This development has just achieved a major milestone: an official version with a stable API. Recently the Alba instruments were successfully upgraded to benefit from this release. In this paper we describe the evolution of these setups as well as the new continuous scan applications run at Alba. On the one hand, the most relevant hardware solutions are presented and assessed. On the other hand, the Sardana software is evaluated in terms of its utility in building continuous scan setups. Finally we discuss the planned improvements designed to satisfy the ever-increasing requirements of the scientists.* http://www.albasynchrotron.es ** Z. Reszela et al. 'Implementation of Continuous Scans Used in Beamline Experiments at Alba Synchrotron', ICALEPCS2013 *** http://www.sardana-controls.org

Funding: INAF. User-Centered Design is a powerful approach for designing UIs that match and satisfy users' skills and expectations. Interviews, affinity diagrams, personas and usage scenarios are some of the fundamental tools for gathering and analysing relevant information. We applied these techniques to the development of the UI for the control room of the Square Kilometre Array (SKA) telescopes. We interviewed the personnel at two of the SKA precursors, LOFAR and MeerKAT, with the goal of understanding which features satisfy operators' needs and which ones can be improved. The findings include several usability issues involving fragmentation and low cohesiveness of the UIs, some gaps, and an excessive number of user actions needed to achieve certain goals. Low usability and the large scale of SKA are two challenges in developing its UI, because they affect the extent to which operators can focus on important data, as well as the likelihood of human errors and their consequences. This paper illustrates the method followed, provides examples of some of the artefacts produced, and describes and motivates the resulting usability recommendations, which are specific to SKA.

The life cycle of an ILL instrument has two main stages. During the design of the instrument, a precise but static 3D model of the different components is developed. Then comes the exploitation of the instrument, during which the Nomad control software allows scientific experiments to be performed. Almost all instruments at the ILL have moveable parts, often hidden behind radiological protection elements such as heavy concrete walls or casemates. Massive elements of the sample environment, such as magnets and cryostats, must be aligned in the beam. All these devices can collide with their surrounding environment. To avoid such accidents, instrument movements must be checked by a pre-experiment simulation that reveals possible interferences. Nomad 3D is the application that links the design and experiment aspects, providing an animated 3D physical representation of the instrument while it moves. Collision detection algorithms will protect the moveable parts from crashes. During an experiment, it will augment reality by making it possible to "see" behind the walls. It will also provide a precise virtual representation of the instrument during simulations.
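A collision-detection pass of the kind described typically starts with a cheap broad-phase test before any expensive mesh-level check. The sketch below shows the standard axis-aligned bounding-box (AABB) overlap test on hypothetical instrument elements; it is a generic illustration of the technique, not the actual Nomad 3D algorithm, which works on full 3D models.

```python
# Minimal broad-phase collision check: two axis-aligned boxes overlap iff
# their intervals overlap on all three axes. Element names are invented.

def aabb_overlap(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)); True if boxes touch."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))

# Hypothetical sample-environment elements, in metres.
cryostat = ((0.0, 0.0, 0.0), (1.0, 1.0, 2.0))
magnet   = ((0.8, 0.5, 0.0), (1.8, 1.5, 2.0))   # intrudes into the cryostat box
wall     = ((3.0, 0.0, 0.0), (3.2, 5.0, 3.0))   # well clear of both
```

In a pre-experiment simulation, such a test would run at each step of a planned movement; only element pairs whose boxes overlap would be passed on to a precise mesh-against-mesh check.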

The CERN Control and Monitoring Platform (C2MON) is an open-source platform for industrial controls data acquisition, monitoring, control and data publishing. C2MON's high-availability, redundant capabilities make it particularly suited for a large, geographically scattered context such as CERN. The C2MON platform relies on the Java technology stack at all levels of its architecture. Since the end of 2016, CERN has offered a platform-as-a-service (PaaS) solution based on RedHat OpenShift. Initially envisioned at CERN for web application hosting, OpenShift can be leveraged to host any software stack due to its adoption of the Docker container technology. In order to make C2MON more scalable and compatible with cloud computing, it was necessary to containerize the C2MON components for the Docker container platform. Containerization is a logical process that forces one to rethink a distributed architecture in terms of decoupled micro-services suitable for a cloud environment. This paper explains the challenges met and the principles behind containerizing a server-centric Java application, demonstrating how simple it has now become to deploy C2MON in any cloud-centric environment.

Funding: Centro Científico Tecnológico de Valparaíso (CONICYT FB-0821)
The ALMA Common Software (ACS) framework provides Bulk Data Transfer (BDT) service implementations that need to be updated for new projects that will use ACS, such as the Cherenkov Telescope Array (CTA), most of which have quite different requirements than ALMA. We propose a new open-source BDT service for ACS based on ZeroMQ that meets CTA data transfer specifications while maintaining backward compatibility with the closed-source solution used in ALMA. The service uses the push-pull pattern for data transfer, the publisher-subscriber pattern for data control, and Protocol Buffers for data serialization, with the option to easily integrate other serialization formats. Besides complying with the ACS interface definition so that it can be used by ACS components and clients, the service provides an independent API to be used outside the ACS framework. Our experiments show a good compromise between throughput and computational effort, suggesting that the service could scale up in terms of the number of producers, the number of consumers and the network bandwidth.
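
The push-pull pattern delivers each pushed data frame to exactly one of a pool of pullers, which is what lets such a service scale with the number of consumers. A stdlib sketch of the pattern's behaviour (queues and threads stand in for ZeroMQ sockets; all names are illustrative):

```python
import queue
import threading

# Stdlib analogue of the ZeroMQ push-pull pattern: a producer pushes data
# frames into a shared pipe; each frame is pulled by exactly one worker.
pipe = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        frame = pipe.get()
        if frame is None:          # sentinel: shut this worker down
            break
        results.put(frame * 2)     # stand-in for real frame processing

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

for frame in range(10):            # "push" side: 10 data frames
    pipe.put(frame)
for _ in workers:                  # one sentinel per worker
    pipe.put(None)
for w in workers:
    w.join()

processed = sorted(results.get() for _ in range(10))
print(processed)  # → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Each frame is processed exactly once, regardless of which worker pulled it; adding workers raises throughput without changing the producer.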

Tango provides the Tango device server object model (TDSOM), whose basic idea is to treat each device as an object. The TDSOM can be divided into four basic elements: the device, the server, the database and the application programmer's interface. On the basis of the TDSOM, we designed a centralized platform for software device management, named VisualDM, providing standard servers and client management software. The functionality of VisualDM is thus multi-fold: 1) dynamically defining or configuring the composition of a device container at run-time; 2) visualization of remote device management based on a system scheduling model; 3) remote deployment and update of software devices; 4) registering, unregistering, starting and stopping devices. In this paper, the platform composition, module functionalities and design concepts are discussed. The platform is applied in the computer-integrated control systems of SG facilities.

The Extremely Large Telescope is a 39-metre ground-based telescope being built by ESO. It will be the largest optical/near-infrared telescope in the world, and first light is foreseen for 2024. The overall ELT Linux development environment will be presented, with an in-depth presentation of its core, the waf build system, and the customizations that ESO is currently developing. The ELT software development for telescopes and instruments poses many challenges to cover the different needs of such a complex system: a variety of technologies; Java, C/C++ and Python as programming languages; Qt5 as the GUI toolkit; communication frameworks such as OPC UA, DDS and ZeroMQ; interaction with entities such as PLCs and real-time hardware; and users, in-house and external, looking at new usage patterns. All of this must be optimized to be ready in time for first light. To meet these requirements, a set of tools was selected, ranging from an IDE to compilers, interpreters, and analysis and debugging tools for the various languages and operations. At the heart of the toolkit lies the modern build framework waf: a versatile tool written in Python, selected due to its multiple-language support and high performance.

ELI-ALPS (Extreme Light Infrastructure - Attosecond Light Pulse Source) is a new research infrastructure under implementation in Hungary. The infrastructure will consist of various systems (laser sources, beam transport, secondary sources, end stations) built on top of common subsystems (HVAC, cooling water, vibration monitoring, vacuum system, etc.), yielding a heterogeneous environment. To support the full control software development lifecycle for this complex infrastructure, a flexible hierarchical configuration model has been defined, and a supporting toolset has been developed for its management. The configuration model is comprehensive, as it covers all relevant aspects of the entire controlled system, the control software components and all the necessary connections between them. Furthermore, it supports the generation of virtual environments that approximate the hardware environment for software testing purposes. The toolset covers configuration functions such as storage, version control, GUI editing and queries. The model and tools presented in our paper are not specific to ELI-ALPS or to the ELI community; they may be useful for other research institutions as well.
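
The core of such a hierarchical model is that a more specific level (subsystem, device) overrides only what differs from its parents. An illustrative sketch of this resolution step, not the ELI-ALPS toolset itself (keys and values are invented):

```python
# Hierarchical configuration resolution: later (more specific) levels
# override earlier (more general) ones, key by key.

def resolve(*levels):
    """Merge configuration dicts from most general to most specific."""
    merged = {}
    for level in levels:
        merged.update(level)
    return merged

facility = {"log_level": "INFO", "archive": True}   # facility-wide defaults
subsystem = {"log_level": "DEBUG"}                  # e.g. vacuum-system override
device = {"poll_period_s": 0.5}                     # device-specific setting

cfg = resolve(facility, subsystem, device)
print(cfg)  # → {'log_level': 'DEBUG', 'archive': True, 'poll_period_s': 0.5}
```

The same resolution rule applied against a simulated hardware inventory instead of the real one is what makes generating approximate virtual test environments cheap.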

The Square Kilometre Array (SKA) is a global project that aims to build a large radio telescope in Australia and South Africa, with around 100 organizations in 20 countries engaged in its detailed design. The Signal and Data Transport (SaDT) consortium includes the software and hardware necessary for the transmission of data and information between elements of SKA, and the Synchronization and Timing (SAT) system provides frequency and clock signals. The SAT local monitoring and control system (SAT.LMC) monitors and controls the SAT system. SAT.LMC has its team members distributed across India, South Africa and the UK. This paper discusses the systems engineering methods adopted by SAT.LMC for interface design with work packages owned by different organizations, configuration control of design artefacts, and quality control through intermediate releases, design assumptions and risk management. The paper also discusses the internal SAT.LMC team communication model, cross-cultural sensitivity and the leadership principles adopted to keep the project on track and deliver quality design products whilst staying flexible to changes in the overall SKA program.

The TANGO Controls Framework* continues to mature and be adopted by new sites and applications. This paper will describe how TANGO has moved closer to industry with the creation of startups and the addressing of industrial use cases. It will describe what progress has been made since the last ICALEPCS in 2015 to ensure the sustainability of TANGO for scientific and industrial users. It will present TANGO web-based technologies and the deployment of TANGO in the cloud. Furthermore, it will describe how the community has re-organised itself to fund and improve code sharing, documentation, code quality assurance and maintenance.
* http://tango-controls.org

Funding: U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
The National Ignition Facility (NIF) is the world's largest and most energetic laser experimental facility, with 192 beams capable of delivering 1.8 megajoules and 500 terawatts of ultraviolet light to a target. Officially commissioned as an operational facility on March 21, 2009, NIF is expected to conduct research experiments through 2039. The 30-year lifespan of the control system presents several challenges in meeting reliability, availability, and maintainability (RAM) expectations. As NIF continues to expand its experimental capabilities, the control system's software base of 3.5 million lines of code grows, with most of the legacy software still in operational use. Supporting this software is further complicated by technology life cycles and turnover of senior experienced staff. This talk will present lessons learned and new initiatives related to technology refreshes, risk mitigation, and changes to our software development and test methodology to ensure high control system availability for supporting experiments throughout NIF's lifetime.
LLNL-ABS-727374

areaDetector is an EPICS framework for 2-D and other types of detectors that is widely used at synchrotron and neutron facilities. Recent enhancements to the EPICS areaDetector module will be presented:
- Plugins can now run multiple threads to significantly increase performance.
- Scatter/gather capability for plugins to run in parallel.
- An ImageJ plugin that uses EPICS V4 pvAccess rather than Channel Access, providing structured data with atomic updates and better performance than the Channel Access plugin.
- An ImageJ plugin that allows graphically defining the detector readout region, ROIs, and overlays.
- Plugins can now be reprocessed without receiving a new NDArray, for testing the effect of different parameters, etc.
A roadmap for future developments will also be presented.

The ASTRI SST-2M telescope is a prototype proposed for the Small Size class of Telescopes of the Cherenkov Telescope Array (CTA). The ASTRI prototype adopts innovative solutions for the optical system, which pose stringent requirements on the design and development of the Telescope Control System (TCS), whose task is the coordination of the telescope devices. All the subsystems are managed independently by the related controllers, which are developed using PC-based technology and the TwinCAT3 environment for the software PLC. The TCS is built upon the ALMA Common Software framework and uses the OPC-UA protocol for the interface with the telescope components, providing simplified full access to the capabilities offered by the telescope subsystems for normal operation, testing, maintenance and calibration activities. In this contribution we highlight how the ASTRI approach to the design, development and implementation of the TCS has made the prototype a stand-alone, intelligent and active machine, also providing an easy path to integration into an array configuration such as the future ASTRI mini-array proposed to be installed at the southern site of the CTA.

Funding: Extreme Light Infrastructure, CZ.1.05/1.1.00/02.0061
The ELI Beamlines facility is a petawatt laser facility in the final construction and commissioning phase in Prague, Czech Republic. The central control system connects and controls more than 40 complex subsystems (lasers, beam transport, beamlines, experiments, facility systems, safety systems) with hundreds of cameras. For this, a comprehensive set of standard solutions is provided: hardware interface standards guarantee ad-hoc software integration, and for commonly used models, standardised auxiliary hardware (triggering: optical/TTL, power supplies) is available. Information on key parameters (vacuum compatibility, noise levels) is collected. 95% of the cameras are interfaced using a vendor-independent C++ SDK. Exceptions are made only for special detectors (for example, wavefront sensors and X-ray cameras). By using a strict model-based approach and a component-based design, all cameras and 2D detectors can be controlled with the same C++ API. This leads to standardized GUIs, TANGO servers, etc.

In the X-ray experimental stations at SPring-8, beamline staff and experimental users sometimes need to reconfigure the measurement system for new experiments. Quick reconfiguration of the system is required, and this has resulted in laborious work. The aim of DARUMA is to provide a standardized procedure for constructing a flexible data collection and control system for experimental stations. It utilizes the control framework MADOCA II*, developed for the distributed control of accelerators and beamlines at SPring-8. A unified control procedure with abstracted text-based messaging significantly reduces the time and cost of preparing the measurement system. DARUMA provides applications for 2D detectors such as PILATUS, pulse motors and the trigger system used in the stations. Image data are collected with metadata into a NoSQL database, Elasticsearch. Image analysis tools, such as online monitoring and offline analysis, are also provided. User applications can be easily developed with Python and LabVIEW. DARUMA can be flexibly applied to experimental stations and is being implemented in BL03XU at SPring-8. We are also planning to introduce it into other experimental stations.
* T. Matsumoto et al., Proceedings of ICALEPCS 2013, p.944

Funding: National Natural Science Foundation of China (No. 11375186, No. 21327901)
FELiChEM is an infrared free-electron laser user facility under construction at NSRL. The design of the interlock system of FELiChEM is based on EPICS. The interlock system is made up of a hardware interlock system and a software interlock system. The hardware interlock system is constructed with PROFINET and redundancy technology. The software interlock system is designed with an independent configuration file to improve its flexibility. The test results of the prototype system are also described in this paper.

European Spallation Source (ESS), the next-generation neutron source facility, is expected to produce an immense amount of data. Various working groups, mostly associated with the EU project BrightnESS, aim at developing solutions for its data-intensive challenges. Real-time data management and aggregation is among the top priorities. The Apache Kafka framework will be the basis for ESS real-time distributed data streaming. One of the major challenges is the simulation of data streams from experimental data generation to data analysis and storage. This presentation outlines a simulation approach based on the DonkiOrchestra data acquisition and experiment control framework, re-purposed as a data streaming simulation system compatible with the ESS Kafka infrastructure.

Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
This paper describes the development and use of an HTTP services architecture for building controls applications within the BNL Collider-Accelerator department. Instead of binding application services (access to live, database, and archived data, etc.) into monolithic applications using libraries written in C++ or Java, this new method moves those services onto networked processes that communicate with the core applications using the HTTP protocol and a RESTful interface. This allows applications to be built for a variety of different environments, including web browsers and mobile devices, without the need to rewrite existing library code that has been built and tested over many years. Making these HTTP services available via a reverse proxy server (NGINX) adds additional flexibility and security. This paper presents implementation details, pros and cons to this approach, and expected future directions.
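
The essence of the approach is that a networked process answers a plain HTTP GET with structured data, so any client environment, browser, mobile or script, can consume it. A minimal self-contained sketch (the endpoint path and payload are invented for illustration, not the BNL API):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny RESTful "live data" service: one GET endpoint returning JSON.
class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/devices/magnet1/current":   # hypothetical endpoint
            body = json.dumps({"value": 42.5, "unit": "A"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):   # silence per-request logging in the demo
        pass

server = HTTPServer(("127.0.0.1", 0), DeviceHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any HTTP-capable client can now read the value; a browser would do the same.
url = f"http://127.0.0.1:{server.server_port}/devices/magnet1/current"
with urllib.request.urlopen(url) as resp:
    reading = json.load(resp)
print(reading)  # → {'value': 42.5, 'unit': 'A'}
server.shutdown()
```

In production such a service would sit behind the NGINX reverse proxy, which handles TLS, routing and access control without the service itself changing.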

The temperature and humidity measurement system at the KEK injector linac consists of 26 data loggers connected to around 700 temperature and humidity sensors, one EPICS IOC, and the CSS archiver. The CSS archiver engine retrieves the temperature and humidity data measured by the data loggers via Ethernet. These data are finally stored in a PostgreSQL-based database. A new server computer has recently been brought into service for the CSS version 4 archiver in place of version 3, which drastically improves the speed of retrieving archived data. The long-term beam stability of the linac has become a very important figure of merit, since simultaneous top-up injection is required for the four independent storage rings toward SuperKEKB Phase II operation. For this reason, we developed a new archiver data management application with good operability. Since it enables operators to quickly detect anomalous behavior of the temperature and humidity data that leads to a deterioration of beam quality, the improved temperature and humidity measurement system can be very effective. We will report the detailed system description and its practical application to daily beam operation.

DISCOS [*] is a control system developed by the Italian National Institute for Astrophysics (INAF) and currently in use at the three radio telescope facilities of Medicina, Noto and the Sardinia Radio Telescope (SRT) [**]. DISCOS development is based on the adoption of the ALMA Common Software (ACS) framework. During the last two years, besides assisting the astronomical commissioning of the newly-built SRT and enabling its early science program, the control system has undergone some major upgrades. The long-awaited transition to a recent ACS version was performed, migrating the whole code base to a 64-bit operating system and compilers, addressing the obsolescence problem that was causing a major technical debt to the project. This opportunity allowed us to perform some refactoring in order to implement improved logging and resource management. During this transition, the code management platform was migrated to a git-based versioning system and the continuous integration platform was modified to accommodate these changes. Further upgrades included the system completion at Noto and the expansion to handle new digital backends.
* Orlati A. et al., Design Strategies in the Development of the Italian Single-dish Control System, ICALEPCS 2015
** Bolli P. et al., SRT: General Description, Technical Commissioning and First Light

The paper describes planning and execution of large-scale maintenance campaigns of SCADA systems for CERN accelerator and technical infrastructure. These activities, required to keep up with the pace of development of the controlled systems and rapid evolution of software, are constrained by many factors, such as availability for operation and planned interventions on equipment. Experience gathered throughout the past ten years of maintenance campaigns for the SCADA Applications Service at CERN, covering over 230 systems distributed across almost 120 servers, is presented. Further improvements for the procedures and tools are proposed to adapt to the increasing number of applications in the service and reduce maintenance effort and required downtime.

With the advent of LCLS-II, SLAC must effectively and collectively plan for operation of its premier scientific production facility. LCLS-II presents unique new challenges for SLAC, with its electron beam rate of up to 1 MHz, complex bunch patterns, and multiple beam destinations. These machine advancements, along with long-term goals for automated tuning, model-dependent and model-independent analysis, and machine learning, provide strong motivation to enhance the SLAC software toolkit by augmenting EPICS V3 to take full advantage of EPICS V4, which supports structured data and facilitates a language-agnostic middleware service layer. The software platform upgrade path in support of controls, online physics and experimental facilities software for the LCLS-I/II complex is described.

The control system of the 1.5 GeV Taiwan Light Source has been working for nearly 25 years. The TLS control system is a proprietary design, and limited resources have made a major revision impossible. Since its delivery, several minor upgrades have been performed to avoid the obsolescence of system components and keep the system up to date. These efforts allowed new devices to be installed, obsolete parts to be replaced, and new software components and functionality to be added. The strategies and efforts are summarized in this report.

Funding: Centro Científico Tecnológico de Valparaíso (CONICYT FB-0821), Advanced Center for Electrical and Electronic Engineering (CONICYT FB-0008)
The ALMA Common Software (ACS) is a distributed framework used for the control of astronomical observatories, which is built and deployed using roughly the same tools available at its design stage. Due to shallow and rigid dependency management, the strong modularity principle of the framework cannot be exploited for packaging, installation and deployment. Moreover, life-cycle control of its components does not comply with standardized system-based mechanisms. These problems are shared by other instrument-based distributed systems. Because of them, the new high-availability requirements of modern projects, such as the Cherenkov Telescope Array, tend to be implemented as new software features rather than by using off-the-shelf and well-tested platform-based technologies. We present a general solution for high availability strongly based on system services and proper packaging. We use RPM packaging, oVirt and Docker as the infrastructure managers, Pacemaker as the software resource orchestrator, and Systemd for life-cycle process control. A prototype for ACS was developed to handle its services and containers.

Funding: The Key Fund for Outstanding Youth Talent of Anhui Educational Commission of China (No. 2013SQRL099ZD)
The China Fusion Engineering Test Reactor (CFETR) is a superconducting tokamak device, a next-generation engineering reactor between ITER and DEMO. It is now being designed by the Chinese national integration design group. In the present design, its magnet system consists of 16 Toroidal Field (TF) coils, 6 Central Solenoid (CS) coils and 8 Poloidal Field (PF) coils. A helium refrigerator with an equivalent cooling capacity of 5 kW at 4.5 K for the CFETR TF coil test facility is proposed. It can provide 3.7 K and 4.5 K supercritical helium for the TF coil, 50 K cold helium at a flow rate of 10 g/s for the High Temperature Superconducting (HTS) current leads, and 50 K cold helium with a cooling capacity of 1.5 kW for the thermal shield. This paper presents the conceptual design of the cryogenic control system for the CFETR TF coil test, including the architecture, hardware design and software development.

L.R. Brederode, L. Van den Heever
SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa

The MeerKAT radio telescope is currently in full production in South Africa's Karoo region and will be the largest and most sensitive radio telescope array in the centimeter wavelength regime in the southern skies until the SKA1 MID telescope is operational. This paper identifies the key telescope specifications, discusses the high-level architecture and current progress to meet the specifications. The MeerKAT Control and Monitoring subsystem is an integral component of the MeerKAT telescope that orchestrates all other subsystems and facilitates telescope level integration and verification. This paper elaborates on the development plan, processes and roll-out status of this vital component.

The LCLS-II facility currently under construction at SLAC will be capable of delivering an electron beam at a rate of up to almost 1 MHz. The BPM system (and other diagnostics) is required to acquire time-stamped readings for each individual bunch. The high rate mandates that the processing algorithms, as well as the data exchange with other high-performance systems such as the MPS (machine protection system) or bunch-length monitors, are implemented with FPGA technology. Our BPM-processing firmware builds on top of the SLAC "common platform" [*] and integrates tightly with core services provided by the platform such as timing, data buffering and communication channels.
* "The SLAC Common-Platform Firmware for High-Performance Systems"; submission #3014 to ICALEPCS 2017.

A new general-purpose data acquisition and control board (Board51) is presented in this paper. Board51 has primarily been developed for use in the ALICE experiment at CERN, but its open design allows for wide use in any application requiring a flexible and affordable data acquisition system. It provides analog I/O functionality and comes with a software bundle allowing easy integration into SCADA systems. Based on the Silicon Labs C8051F350 MCU, the board features a fully-differential 24-bit ADC that enables very precise DAQ at sampling rates up to 1 kHz. For analog outputs, two 8-bit current-mode DACs can be used. Board51 is equipped with a UART-to-USB interface that allows communication with any computer platform. As a result, the board can be controlled through the DIM system. This is provided by a program running on a computer that publishes services, including the measured analog values of each ADC channel, and accepts commands for setting the ADC readout rate and the DAC voltages. Digital inputs/outputs are also accessible using the DIM communication system. These services enable any computer on a common network to read measured values and control the board.

The Square Kilometre Array (SKA) project aims to build a large radio telescope consisting of multiple dishes and dipoles, in South Africa (SKA1-Mid) and Australia (SKA1-Low) respectively. The Synchronization and Timing (SAT) system of SKA provides frequency and clock signals from a central clock ensemble to all elements of the radio telescope, critical to the functionality of SKA acting as a unified large telescope using interferometry. The local monitor and control system for SAT (SAT.LMC) will monitor and control the working of the SAT system, consisting of the timescale generation system, the frequency distribution system and the timing distribution system. SAT.LMC will also enable the Telescope Manager (TM) to perform SAT maintenance and operations. As part of the Critical Design Review, SAT.LMC is getting close to submitting its final architecture and design. This paper discusses the architecture, technology, and the outcomes of prototyping activities.

In addition to the large LHC experiments, CERN hosts a number of other experimental areas with a rich research program ranging from fundamental physics to medical applications. The risk assessments have shown a large palette of potential hazards (radiological, electrical, chemical, laser, etc.) that need to be properly mitigated in order to ensure the safety of personnel working inside these areas. A Personnel Protection System, typically, accomplishes this goal by implementing a certain number of heterogeneous functionalities as interlocks of critical elements, management of a local HMI, data monitoring and interfacing with RFID badge readers. Given those requirements, reducing system complexity and costs are key parameters to be optimized in the solution. This paper is aimed at summarizing the findings, in terms of costs, complexity and maintenance reduction, offered by a technology from National Instruments® based on cRIO controllers and a new series of SIL-2 certified safety I/O modules. A use case based on a service for the protection of Class 4 laser laboratories will be described in detail.

The CMS ECAL Detector Control System (DCS) features several monitoring mechanisms able to react and perform automatic actions based on pre-defined action matrices. The DCS is capable of early detection of anomalies inside the ECAL and on its off-detector support systems, triggering automatic actions to mitigate the impact of these events and preventing them from escalating to the safety system. The treatment of such events by the DCS allows for a faster recovery process, better understanding of the development of issues, and in most cases, actions with higher granularity than the safety system. This paper presents the details of the DCS automatic action mechanisms, as well as their evolution based on several years of CMS ECAL operations.

Funding: NRC, WD, NSERC, CIHR, University of Saskatchewan, Government of Saskatchewan, and CFI
At the Canadian Light Source (CLS) synchrotron, the addition of the Quantum Materials Spectroscopy Centre (QMSC) beamline requires the addition of an Elliptically Polarizing Undulator (EPU) insertion device to produce photons from the stored electron beam. Unlike the majority of such insertion devices, this EPU is capable of producing photons of simultaneous arbitrary elliptical and linear phases, in addition to a range of energies. This EPU is also capable of creating perturbations of the stored electron beam sufficient to interrupt an injection. In order to prevent this, compensation controls have been developed. These controls are accomplished with a combination of the Experimental Physics and Industrial Control System (EPICS), mathematical models, and algorithms written in C and MATLAB.

Control systems of neutron instruments are responsible for the movement of a variety of mechanical axes. In the TANGO-based control systems developed by Forschungszentrum Jülich for neutron instruments, Siemens S7-300 PLCs with single-axis stepper motor controllers from Siemens or Phytron have been used for this purpose in the past. Synchronous coordinated movement of several axes has been implemented with dedicated 4-axis NC modules (FM357) for the S7-300. In future, the recent S7-1500 PLC family shall be used for motion tasks. With the S7-1500, stepper motor control is possible with low-cost fast digital outputs, so-called PTOs (pulse train outputs). The integrated motion functions of the S7-1500 directly support synchronous movement. The function block interface defined by PLCopen serves as a homogeneous programming interface which is independent of a specific motion controller. For the single crystal diffractometer HEiDi at the research reactor FRM-II, a replacement for an S7-300 with FM357 has been implemented based on an S7-1500 PLC and a PTO module.

The Brazilian Synchrotron Light Laboratory (LNLS) is in the final stages of developing an open-source BPM system for Sirius, a 4th-generation synchrotron light source under construction in Brazil. The system is based on the MicroTCA.4 standard, comprising AMC FPGA boards carrying FMC digitizers and a CPU module. The software is built with the HALCS framework [1] and employs a service-oriented architecture (SOA) to export a flexible interface between the gateware modules and their clients, providing a set of loosely-coupled components favoring reusability, extensibility and maintainability. In this paper, the BPM system will be discussed in detail, focusing on how specific functionalities of the system are integrated and developed in the framework to provide SOA services. In particular, two domains will be covered: (i) gateware modules, such as the ADC interface, acquisition engine and digital signal processing; (ii) their software service counterparts, showing how these modules can interact with each other in a uniform way, easing integration with control systems.
[1] L.M. Russo, J.V. Ferreira Filho, "Gateware and Software Frameworks for Sirius BPM Electronics", PCaPAC16, paper THDAPLCO03.

Inspired by the recent developments of reactive programming and the ubiquity of the concept of streams in modern software industry, we assess the relevance of a reactive streams solution in the context of accelerator controls. The promise of reactive streams, to govern the exchange of data across asynchronous boundaries at a rate sustainable for both the sender and the receiver, is alluring to most data-centric processes of CERN's accelerators. Taking advantage of the renovation of one key software piece of our supervision layer, the Beam Interlock System GUI, we look at the architecture, design and implementation of a reactive streams based solution. Additionally, we see how this model allows us to re-use components and contributes naturally to the extension of our tool set. Lastly, we detail what hindered our progression and how our solution can be taken further.
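
The central promise of reactive streams, keeping the exchange rate sustainable for both sender and receiver, can be illustrated with nothing more than a bounded buffer: when the consumer falls behind, the producer blocks instead of flooding it. A stdlib sketch of this back-pressure behaviour (illustrative only, not the actual Beam Interlock System GUI code):

```python
import queue
import threading

# A bounded buffer between producer and consumer: at most 4 updates may be
# in flight, so a fast producer is throttled to the consumer's pace.
buffer = queue.Queue(maxsize=4)
received = []

def consumer():
    while True:
        item = buffer.get()
        if item is None:       # sentinel: end of stream
            break
        received.append(item)  # stand-in for updating a GUI view

t = threading.Thread(target=consumer)
t.start()

for update in range(100):      # producer side: 100 supervision updates
    buffer.put(update)         # blocks whenever the buffer is full
buffer.put(None)
t.join()

print(len(received), received[:3])  # → 100 [0, 1, 2]
```

Reactive-streams libraries generalize this idea with explicit demand signalling and composable operators, but the rate-governing contract is the same.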

Today's front-end controllers, widely used in CERN's controls environment, feature CPUs with high clock frequencies and extensive memory. Their specifications are comparable to low-end servers, or even smartphones. The Java Virtual Machine (JVM) has been running on similar configurations for years now, and it seems natural to evaluate the behaviour of JVMs in this environment to determine whether firm or soft real-time constraints can be addressed efficiently. Using Java at this low level offers the opportunity to refactor CERN's current implementation of the device/property model and to move away from a monolithic architecture towards a promising and scalable separation of concerns, where the front-end publishes raw data that other layers decode and re-publish. This paper first presents an evaluation of Machine Protection control system requirements in terms of real-time constraints and a comparison of the performance of different JVMs with respect to these constraints. It then details the efforts towards a first prototype of a minimal real-time Java supervision layer providing access to the hardware layer.
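Characterising soft real-time behaviour typically comes down to measuring how far a periodic task overshoots its deadlines. A sketch of such a jitter measurement (here in CPython for illustration; the paper's actual benchmarks target JVMs, where GC pauses are the dominant source of overshoot):

```python
# Sketch of a worst-case deadline-overshoot (jitter) measurement for a
# periodic loop -- the kind of figure used to judge whether a runtime can
# meet firm or soft real-time constraints. Illustrative only.
import time

def measure_jitter(period_s=0.001, cycles=200):
    deadline = time.perf_counter()
    worst = 0.0
    for _ in range(cycles):
        deadline += period_s
        while time.perf_counter() < deadline:
            pass                                  # busy-wait to the deadline
        worst = max(worst, time.perf_counter() - deadline)
    return worst                                  # worst overshoot in seconds

print(f"worst overshoot: {measure_jitter() * 1e6:.0f} us")
```

A firm real-time verdict then reduces to comparing this worst-case overshoot against the protection system's reaction-time budget.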

For several years, SOLEIL* has used SIEMENS PLCs** as a standard for signal monitoring and security. SOLEIL is now considering a major upgrade of its facilities and has to adapt its organization to ensure efficient operation and R&D; in this context, automation experts have been merged into a single group. In the medium term, migration from the existing 3XX-series PLCs to the new 15XX series will be necessary. As the 15XX-series PLCs no longer support the Fetch/Write protocol, a first step is the upgrade of the TANGO*** PLCServer. This software device ensures data exchange with supervisory applications using the TANGO infrastructure: it opens multiple TCP/IP connections to the PLC hardware, manages asynchronous communication to read/write PLC datablocks and acts as a server for other clients. The upgraded PLCServer is based on the Snap7**** open-source Ethernet communication suite for interfacing with Siemens PLCs using the native S7 protocol. This paper details the evolution, performance and limitations of this new version of the PLCServer.
*French synchrotron light facility **Programmable Logic Controller ***Toolkit for distributed control systems, supervisory and data acquisition (www.tango-controls.org) ****snap7.sourceforge.net
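After a Snap7 read returns the raw bytes of a datablock, a PLCServer-style device still has to decode them. S7 datablocks store multi-byte values big-endian (INT is a signed 16-bit integer, REAL an IEEE-754 32-bit float), so the decoding can be sketched with the standard `struct` module; the datablock layout below is hypothetical:

```python
# Sketch of decoding an S7 datablock buffer after a Snap7 read. The layout
# (INT at byte 0, REAL at byte 2) is a made-up example; S7 values are
# big-endian, hence the '>' format prefixes.
import struct

def decode_db(buf):
    status, = struct.unpack_from('>h', buf, 0)   # INT  (16-bit signed, byte 0)
    temp,   = struct.unpack_from('>f', buf, 2)   # REAL (32-bit float, byte 2)
    return {'status': status, 'temperature': temp}

# With the Snap7 Python bindings, the raw bytes would come from something like
# client.db_read(db_number, start, size); here we fabricate them for the demo.
raw = struct.pack('>h', 1) + struct.pack('>f', 23.5)
print(decode_db(raw))   # {'status': 1, 'temperature': 23.5}
```

Getting the byte order and offsets right is exactly the part that breaks when migrating from Fetch/Write to the S7 protocol, which is why it is isolated in one decoding function.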

Model checking is a formal verification technique that checks given properties of models, designs or programs with mathematical precision. Due to its high demands on expertise and computing resources, the use of model checking is restricted mainly to the core parts of highly critical systems. However, we and many other authors have argued that automated model checking of PLC programs is feasible and beneficial in practice. In this paper we aim to explain why model checking is applicable to PLC programs even though its application to software in general remains difficult. We present an overview of the particularities of PLC programs which influence the feasibility and complexity of their model checking. Furthermore, we list the main challenges in this domain and the solutions proposed in previous works.
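One particularity that makes PLC programs amenable to model checking is their cyclic execution model: a scan cycle is a pure function of (state, inputs), so for small variable domains the reachable state space can be enumerated exhaustively. A toy illustration (this is not PLCverif; the interlock logic and the invariant are invented for the example):

```python
# Toy explicit-state model checker for a PLC-style program: enumerate every
# state reachable through scan cycles under all input combinations, and check
# an invariant in each. Logic and property are hypothetical examples.
from itertools import product

def scan(state, inputs):
    # Example interlock logic: latch a trip on fault, clear it on reset.
    tripped = (state['tripped'] or inputs['fault']) and not inputs['reset']
    return {'tripped': tripped, 'output_enabled': not tripped}

def check_invariant(initial, invariant):
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        key = tuple(sorted(s.items()))
        if key in seen:
            continue
        seen.add(key)
        if not invariant(s):
            return False, s                      # counterexample state
        for fault, reset in product([False, True], repeat=2):
            frontier.append(scan(s, {'fault': fault, 'reset': reset}))
    return True, None

init = {'tripped': False, 'output_enabled': True}
ok, cex = check_invariant(
    init, lambda s: not (s['tripped'] and s['output_enabled']))
print(ok)   # True: the output is never enabled while tripped
```

Real PLC model checkers work on the same principle but fight state-space explosion with abstraction and reduction techniques, which is where the challenges listed in the paper come in.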

The large number of industrial control systems based on PLCs (Programmable Logic Controllers) at CERN implies a huge number of programs and lines of code. Software quality assurance therefore becomes a key point in ensuring the reliability of the control systems. Static code analysis is a relatively simple, easy-to-use way to find potential faults or error-prone parts in source code. While static code analysis is widely used for general-purpose programming languages (e.g. Java, C), this is not the case for PLC programs. We have analyzed the possibilities and the gains to be expected from applying static analysis to the PLC code used at CERN, based on the UNICOS framework. This paper reports on our experience with the method and the available tools, and sketches an outline for future work to make this analysis method practically applicable.
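To give a flavour of what static analysis of PLC code looks like, here is a deliberately minimal check over Structured Text assignments that flags variables written but never read, a classic dead-store warning. This is not a UNICOS tool; the ST snippet and the exemption list for declared outputs are invented for the example:

```python
# Minimal static check over Structured Text source: report variables that
# are assigned but never appear on any right-hand side. Declared outputs
# are exempt, since they are legitimately write-only. Illustrative only.
import re

def unused_writes(st_source, outputs=frozenset()):
    written, read = set(), set()
    for line in st_source.splitlines():
        m = re.match(r'\s*(\w+)\s*:=\s*(.+);', line)
        if m:
            written.add(m.group(1))
            read.update(re.findall(r'\b[A-Za-z_]\w*\b', m.group(2)))
    return sorted(written - read - set(outputs))

code = """
    pump_on := start AND NOT fault;
    spare   := fault;
    valve   := pump_on;
"""
print(unused_writes(code, outputs={'valve'}))   # ['spare']
```

Production tools work on a parsed program representation rather than regular expressions, but the payoff is the same: mechanical detection of error-prone patterns across a very large code base.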

The development of critical systems requires the application of verification techniques in order to guarantee that the system meets its requirements. Standards like IEC 61508 provide guidelines and recommend the use of formal methods for this purpose. The ITER Interlock Control System has been designed to protect the tokamak and its auxiliary systems from component failures or incorrect machine operation. ITER has developed a method to ensure that certain critical operator commands have been correctly received and executed in the PLC (Programmable Logic Controller). The implementation of this method in a PLC program is a critical part of the interlock system. A methodology designed at CERN has been applied to verify this PLC program. The methodology is the result of five years of research into the applicability of model checking to PLC programs, and a proof-of-concept tool called PLCverif implements it. This paper presents the challenges and results of the ongoing collaboration between CERN and ITER on the formal verification of critical PLC programs.

B. Xaia, T. Gatsi, O.J. Mokone
SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa

Funding: SKA (SA) - National Research Foundation (NRF)
The 64-dish MeerKAT radio telescope, under construction in South Africa, will be the largest and most sensitive radio telescope in the Southern Hemisphere until it is integrated with the Square Kilometre Array (SKA). Software testing is an integral part of software development, aimed at evaluating software quality and at verifying and validating that the given requirements are met. This poster presents the approach, techniques and tools used to automate the testing of the software that controls and monitors the telescope. The Jenkins continuous integration server is used to run the automated tests, with Git and Docker as supporting tools. In addition, we use an Automated Qualification Framework (AQF), in-house software that automates as much as possible of the functional testing of the Control and Monitoring (CAM) software. The AQF is invoked from Jenkins by launching a fully simulated CAM system and executing the Integrated CAM Tests against this simulated system as CAM regression testing. The advantages and limitations of the automated testing are discussed in detail in the paper.

The web continues to grow as an application platform, with accessibility and platform independence as major benefits. It also makes it possible to tie services together in new ways through simple APIs. At MAX IV we use web services for various purposes related to the control system, for example monitoring servers and services, accessing alarm history, viewing control system status, managing system and user logs, and running recurring jobs. Furthermore, all user management is accessed via web applications, and even data analysis and experiment control can now be performed via web-based interfaces. We make an effort to use existing tools whenever possible (e.g. Kibana, Prometheus), and otherwise develop systems in-house based on well-established libraries and standards such as JavaScript, Python and Apache. This paper presents an overview of our activities in the field and describes the different architectural decisions taken.
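Exposing control-system state to web tools usually starts with a small HTTP status endpoint that monitoring stacks can scrape. A self-contained sketch using only the Python standard library (the endpoint, field names and values are hypothetical, not MAX IV's actual services):

```python
# Sketch of a minimal JSON status endpoint of the kind scraped by web
# monitoring tools. Service name, fields and values are made up.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

STATUS = {'service': 'archiver', 'state': 'RUNNING', 'alarms': 0}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(STATUS).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass                                 # silence per-request logging

server = HTTPServer(('127.0.0.1', 0), StatusHandler)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = json.load(urlopen(f'http://127.0.0.1:{server.server_port}/status'))
print(reply['state'])   # RUNNING
server.shutdown()
```

In practice such endpoints are served by a proper framework behind Apache, but the shape of the contract, plain HTTP plus JSON, is what lets off-the-shelf tools like Prometheus consume them.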

The ELI Beamlines facility is a petawatt laser facility in the final construction and commissioning phase in Prague, Czech Republic. The central control system operates and controls complex subsystems (lasers, beam transport, beamlines, experiments, facility systems, safety systems) with a huge number of devices and computers. Standards for software development were therefore established: model-based development, a standard approach to user interfaces, standard approaches to device interfaces, and interfaces to third-party environments. The TANGO framework was chosen for communication in the distributed control system environment.

The Front-End Software Architecture (FESA) framework is the basis for most real-time software development for accelerator control at CERN. FESA designs are defined in an XML document which is validated against a schema to enforce framework constraints, and are used to automatically generate C++ boilerplate code in which the developer can then implement specific code. Design files can rapidly grow in complexity, making an overview of the resulting system almost impossible. One way to overcome this is to use a graph-based representation of the design, with XML fragments summarized into logical blocks and associations between the blocks depicted by arrows. As the intricacy of the graph mirrors the complexity of the design, it is also essential to provide an interactive Graphical User Interface (GUI) for parameterising and editing the graph generation in order to fine-tune a simpler and cleaner illustration of a FESA design. This paper describes such a GUI (the FESA Graph Editor) and outlines how it benefits the design and documentation process of FESA design documents.

Until recently, Java GUI development in the CERN Beam Instrumentation Group has followed an ad-hoc approach despite several attempts to provide frameworks and coding standards. Triggered by the deprecation of Java's Swing toolkit, the JavaFX toolkit has been adopted for the creation of new GUIs, and is foreseen for future migration of Swing-based GUIs. To increase homogenisation and encourage modular coding of JavaFX GUIs, libraries have been developed to standardise accelerator context selection, provide inter-component GUI communication and optimise data streaming between the control system and modules that make up an expert GUI. This paper describes how this has allowed the use of model-view-controller techniques and naming conventions via Maven archetypes. It also details the modernisation of the software delivery process and subsequent renovation of the software portal. Finally, the paper outlines a vision to extend the principles applied to this Java GUI development for future Python-based developments.

Automation plays a key role in the macromolecular crystallography (MX) beamlines at Diamond Light Source (DLS). This is particularly evident with sample exchange; where fast, reliable, and accurate handling is required to ensure high quality and high throughput data collection. This paper looks at the design, build, and integration of an in-house robot control system. The system was designed to improve reliability and exchange times, provide high sample storage capacity, and accommodate easy upgrade paths, whilst gaining and maintaining in-house robotics knowledge. The paper also highlights how peripheral components were brought under the control of a Programmable Logic Controller (PLC) based integration unit, including a vision system.

The Square Kilometre Array (SKA) is a global project to build a multi-purpose radio telescope that will play a major role in answering key questions in modern astrophysics and cosmology. It will be one of a small number of cornerstone observatories around the world that will provide astrophysicists and cosmologists with a transformational view of the Universe. Two major goals of the SKA are to study the history and role of neutral hydrogen in the Universe from the dark ages to the present day, and to employ pulsars as probes of fundamental physics. Since 2008, the global radio astronomy community has been engaged in the development of the SKA and is now nearing the end of the 'Pre-Construction' phase. This talk will give an overview of the current status of the SKA and the plans for construction, focusing on the computing and software aspects of the project.

Funding: DKIST is a facility of the National Solar Observatory funded by the National Science Foundation under a cooperative agreement with the Association of Universities for Research in Astronomy, Inc.
The Daniel K. Inouye Solar Telescope (DKIST) is currently under construction on the summit of Haleakala on the island of Maui. When completed in late 2019 it will be the largest optical solar telescope in the world, with a 4 m clear aperture and a suite of state-of-the-art instruments that will enable our Sun to be studied in unprecedented detail. In this paper we describe the current state of testing, commissioning and calibration of the telescope, and how this is supported by the DKIST control system.

After 20 years of operation, the ESRF has embarked upon an extremely challenging project: the Extremely Brilliant Source (ESRF-EBS). The goal of this project is to construct a 4th-generation light source storage ring inside the existing 844 m long tunnel. The EBS will increase brilliance and coherence by a factor of 100 with respect to the present ESRF storage ring. A major challenge is to keep the present ring operating 24x7 while designing and pre-constructing all the elements of the new ring. This is the first time a 4th-generation light source will be constructed inside an existing tunnel. This paper concentrates on the control system aspects. The control system is 100% TANGO based. The paper lists the main challenges of the new storage ring, such as the hot-swap power supplies and the new timing system, and describes how reliable operation was maintained while modernizing the injector control system and preparing the new storage ring control system, including the new historical database, and how extensive use was made of software simulators to achieve this.
http://www.esrf.fr/files/live/sites/www/files/about/upgrade/documentation/whitepaper-upgrade-phaseII.pdf
P. Raimondi, "The ESRF Low Emittance Upgrade", IPAC'16, Busan, Korea, May 2016, paper WEXA01