On 21 June 2016 the MAX IV Laboratory was inaugurated in the presence of officials and welcomed the first external researchers to the new experimental stations. The MAX IV facility is the largest and most ambitious Swedish investment in research infrastructure and is designed to be one of the brightest sources of X-rays worldwide. The current achievements, progress, collaborations and vision of the facility will be described from the perspective of the control and IT systems.

Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The National Ignition Facility (NIF) is the world's largest and most energetic laser experimental facility, with 192 beams capable of delivering 1.8 megajoules of 500-terawatt ultraviolet laser energy to a target. The energies, temperatures and pressures that can be generated on the NIF allow scientists to create conditions similar to the center of the sun and to explore the physics of planetary interiors, supernovae, black holes and thermonuclear burn. This year concludes a very successful multi-year plan of optimizations to the control and information systems and operational processes to increase the number of experimental target shots conducted in the facility. In addition, many new system control and diagnostic capabilities have been commissioned for operational use to maximize the scientific value produced. With NIF expected to be operational for more than 20 years, focus has also been placed on optimizing the software processes to improve the sustainability of the control system. This talk will report on the current status of each of these areas in support of the wide variety of experiments being conducted in the facility. Release No.: LLNL-ABS-727237-DRAFT

MADOCA II (Message and Database Oriented Control Architecture II) is the next generation of MADOCA, developed to fulfill current and future requirements in accelerator and beamline control at SPring-8. In this paper, we report on the recent evolution of data collection in MADOCA II, which was missing from past reports at ICALEPCS *,**. In MADOCA, the biggest challenge in data collection was registering signals into the Parameter Database smoothly. Users request a Signal Registration Table (SRT) for each new data collection; however, this cost time and manpower due to typos in the SRT and iterations in database registration. In MADOCA II, we streamlined the signal registration scheme with a prior test of data collection and a validity check of the SRT through a web-based user interface. The data collection framework itself was also extended to manage the various data collection types at SPring-8 with a unified method. All data collection methods (polling, event-driven), data formats (such as point and waveform data) and platforms (Unix, embedded, Windows including LabVIEW) can be flexibly managed. We have deployed MADOCA II data collection at SPring-8 on 241 hosts and have confirmed stable operation since April 2016. * T. Matsumoto et al., Proceedings of ICALEPCS 2013, p. 944. ** A. Yamashita et al., Proceedings of ICALEPCS 2015, p. 648.

Two years ago, at the 2015 ICALEPCS conference in Melbourne, Australia, we presented a paper entitled 'Replacing The Engine In Your Car While You Are Still Driving It'*. In that paper we described the mid-point of a very ambitious, multi-year upgrade project involving the complete replacement of the low-level RF system, the timing system, the industrial I/O system, the beam-synchronized data acquisition system, the fast-protect reporting system, and much of the diagnostic equipment. That paper focused mostly on the timing system upgrade and presented several observations and recommendations from the perspective of the timing system and its interactions with the other systems. In this paper, now nearly three quarters of the way through our upgrade schedule, we report on additional observations, challenges, recommendations, and lessons learned from some of the other systems involved. * E. Bjorklund, 'Replacing The Engine In Your Car While You Are Still Driving It', THHC2O03, Proceedings of ICALEPCS2015, Melbourne, Australia (2015)

The current cavern ventilation control system of the CMS experiment at CERN is based on components that are either already obsolete (the SCADA system) or close to end of life (the PLCs). The control system will be upgraded during the CERN Long Shutdown 2 (2019-2020) and will be based on the CERN industrial control standard UNICOS, employing WinCC OA as SCADA and Schneider PLCs. Due to the critical nature of the CMS ventilation installation and the short allowed downtime, the approach was to design an environment based on virtual commissioning of the new control. This solution uses a first-principles model of the ventilation system to simulate the real process. The model was developed with the modelling and simulation software EcosimPro. In addition, the current control application of the cavern ventilation will be re-engineered, as it is not completely satisfactory in some transients, where many sequences are performed manually and some of the observed pressure fluctuations could potentially cause issues for the CMS detector. The plant model will also be used to validate new regulation schemes and transient sequences offline in order to ensure smooth operation in production.

For reliable accelerator operation, it is essential to have a centralized scheme for handling data such as unique equipment IDs, archived and online sensor data, and the operation points and calibration parameters that must be restored upon a change of operation mode. Since 1996, when SPring-8 went into operation, a database system has been utilized for this role. However, as time passed, the original design became inadequate, and new features added on request pushed up maintenance costs. For example, when SACLA started in 2010, we introduced a new data format for shot-by-shot synchronized data. The number of tables storing operation points and calibrations also increased, with various formats. Facing the upgrade project at the site*, it is time to overhaul the whole scheme. In the plan, SACLA will serve as a high-quality injector to a new storage ring while operating as the XFEL user machine. To handle multiple shot-by-shot operation patterns, we plan to introduce a new scheme in which multiple tables inherit information from a common parent table. In this paper, we report the database design for the upgrade project and the status of the transition. * http://rsc.riken.jp/pdf/SPring-8-II.pdf

Funding: Carl Zeiss Foundation. The FAIR control system (CS) is an alarm-based design and employs White Rabbit time synchronization over a GbE network to issue commands executed with 1 ns accuracy. In such a network-based CS, graphs of possible machine command sequences are specified in advance by physics frameworks. The actual traffic pattern, however, is determined at runtime, depending on interlocks and beam requests from experiments and accelerators. In 'unlucky' combinations, large packet bursts can delay commands beyond their deadline, potentially causing emergency shutdowns. Thus, prior verification that any possible combination of the given command sequences can be delivered on time is vital to guarantee deterministic behavior of the CS. Deterministic network calculus (DNC) can derive upper bounds on message delivery latencies. This paper presents an approach for calculating worst-case descriptors of runtime traffic patterns. These so-called arrival curves are deduced from specified partial traffic sequences and are used to calculate end-to-end traffic properties. With the arrival curves and a DNC model of the FAIR CS network, a worst-case latency for specific packet flows or for the whole CS can be obtained.
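The kind of bound DNC provides can be illustrated with the textbook case of a token-bucket arrival curve through a rate-latency server. This is a minimal sketch with invented numbers, not the FAIR network model, which involves far richer arrival curves deduced from the command sequences:

```python
# DNC sketch: token-bucket arrival curve alpha(t) = burst + rate*t and
# rate-latency service curve beta(t) = R * max(0, t - T) give the
# classic worst-case bounds D = T + burst/R and B = burst + rate*T
# (valid when rate <= R). All numbers below are illustrative.

def delay_bound(burst, rate, R, T):
    """Worst-case delay for token-bucket traffic through a rate-latency server."""
    if rate > R:
        raise ValueError("unstable: sustained arrival rate exceeds service rate")
    return T + burst / R

def backlog_bound(burst, rate, R, T):
    """Worst-case backlog (buffer requirement) for the same setting."""
    if rate > R:
        raise ValueError("unstable: sustained arrival rate exceeds service rate")
    return burst + rate * T

# Example: 12 kbit burst, 1 Mbit/s sustained rate, served at 10 Mbit/s
# with 0.5 ms fixed processing latency.
d = delay_bound(burst=12_000, rate=1e6, R=10e6, T=0.0005)   # 1.7 ms
b = backlog_bound(burst=12_000, rate=1e6, R=10e6, T=0.0005)  # 12.5 kbit
```

A command flow is deliverable on time exactly when such a delay bound, composed along the path through the network, stays below the command's deadline.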

At CERN there are over 600 different industrial control systems with millions of deployed sensors and actuators, and monitoring them is a challenging and complex task. This paper describes three different mathematical approaches that have been designed and developed to detect anomalies in CERN control systems. Specifically, one of these algorithms is purely based on expert knowledge, while the other two mine historical data to create a simple model of the system, which is then used to detect anomalies. The methods presented can be categorized as dynamic unsupervised anomaly detection; 'dynamic' because the behaviour of the system changes in time, 'unsupervised' because they predict faults without reference to prior events. Consistent deviations from the historical evolution can be seen as warning signs of a possible future anomaly that system experts or operators need to check. The paper also presents some results obtained from the analysis of the LHC cryogenic system. Finally, the paper briefly describes the deployment of Spark and Hadoop in the CERN environment to deal with huge datasets and to spread the computational load of the analysis across multiple nodes.
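A minimal sketch of the dynamic, unsupervised idea, assuming a simple rolling-statistics model rather than the actual CERN algorithms: flag a sample when it deviates strongly from the recent history of the signal itself, with no labelled fault data needed.

```python
from collections import deque
import math

def rolling_anomalies(series, window=50, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the trailing window (dynamic: the model follows the
    signal; unsupervised: no prior fault labels are used)."""
    buf = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(series):
        if len(buf) == buf.maxlen:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                flagged.append(i)
        buf.append(x)  # the anomaly enters the history, adapting the model
    return flagged

# A sensor oscillating around 10.1 with a single spike to 25:
data = [10.0, 10.2] * 50 + [25.0] + [10.0, 10.2] * 5
spikes = rolling_anomalies(data)
```

Because the window keeps sliding, slow drifts are absorbed into the baseline while sudden deviations are reported, matching the "warning signs" interpretation above.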

We built an EPICS-based radiation therapy machine control system, and are using it to treat patients at our hospital. To help ensure safety, we use a restricted subset of EPICS constructs and programming techniques, and developed several new automated formal verification tools for them. The Symbolic Evaluator checks properties of EPICS database programs (applications), using symbolic evaluation and satisfiability checking. It found serious errors in our control program that were missed by reviews and testing. Other tools are based on a formal semantics for database records, derived from EPICS documentation and expressed in the specification language of an automated theorem prover. The Verified Interpreter is a re-implementation of the parts of the database engine we use, which is proved correct against the formal semantics. We used it to check those parts of EPICS core by differential testing. It found no significant errors (differences between EPICS behavior and the formal semantics). A Verified Compiler is in development. It will compile a database to a standalone program that does not use EPICS core, where the machine code is verified to conform to the formal semantics.
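As a toy analogue of the property checking described above, exhaustive enumeration can stand in for symbolic evaluation plus satisfiability checking at a scale small enough to read. The interlock rule and property here are invented for illustration; they are not the actual clinical logic or the Symbolic Evaluator's algorithm:

```python
from itertools import product

def beam_enabled(door_closed, dose_ok, operator_enable):
    # Toy interlock rule: beam may only be on when every condition holds.
    return door_closed and dose_ok and operator_enable

def check_safety():
    """Safety property: the beam is never enabled while the door is open.
    Enumerate all input combinations; a SAT-based checker would instead
    search symbolically for a counterexample."""
    for door, dose, op in product([False, True], repeat=3):
        if beam_enabled(door, dose, op) and not door:
            return False  # counterexample found: unsafe state reachable
    return True
```

The value of the real tool is that it performs this kind of check on the full EPICS database semantics, where manual review and testing had already missed errors.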

The ALICE Detector Control System (DCS) has been providing its services to the experiment for 10 years. It ensures uninterrupted operation of the experiment and guarantees stable conditions for data taking. The decision to extend the lifetime of the experiment requires a redesign of the DCS data flow. The interaction rates of the LHC in ALICE during the RUN3 period will increase by a factor of 100. The detector readout will be upgraded and will provide 3.4 TBytes/s of data, carried by 10 000 optical links to a first-level processing farm consisting of 1 500 computer nodes and ~100 000 CPU cores. A compressed volume of 20 GByte/s will be transferred to the computing GRID facilities. The detector conditions, consisting of about 100 000 parameters acquired by the DCS, need to be merged with the primary data stream and transmitted to the first-level farm every 50 ms. This requirement results in an increase of the DCS data publishing rate by a factor of 5000. The new system does not allow for any DCS downtime during data taking, nor for data retrofitting. Redundancy, proactive monitoring, and improved quality checking must therefore complement the data flow redesign.

Tokamaks using superconducting magnets are becoming more and more important, as long-pulse operation and the ability to confine high-temperature, high-density plasma place new demands on the interlock system that protects the device. KSTAR achieved H-mode operation for 70 seconds in 2016. In this situation, a precise and fast protection system is necessary to protect the plasma-facing components from high-energy, long-pulse plasma. The higher the energy of the plasma, the faster the protection must act, and accurate protection logic must be realized through high-speed processing of signals from various devices. To meet these requirements, KSTAR implemented the Fast Interlock System (FIS) using CompactRIO (cRIO). The protection logic is implemented in an FPGA, so it can rapidly process a wide variety of inputs and outputs. The EPICS IOC handles communication with peripheral devices, cRIO control, and DAQ. The hard-wired signals for high-speed operation from peripheral devices are connected directly to the cRIO. In this paper, we describe the detailed implementation of the FIS and the results of fast interlock operation in actual KSTAR operation, as well as future plans.

The J-PARC MR's Machine Protection System (MR-MPS) has been in service since the start of beam operation in 2008. Since then, the MR-MPS has contributed to improved safety, including stable operation of the accelerator and the experimental facilities. The present MR-MPS needs to be reviewed with respect to aspects such as the increase in connected equipment, the addition of a power-supply building, flexible beam-abort processing, module uniqueness, and service life. In this paper, we present the performance of the MR-MPS and considerations for a future upgrade.

The European XFEL is a 3.4 km long X-ray Free Electron Laser. The accelerating structure consists of 96 cryomodules running at 1.3 GHz with a 10 Hz repetition rate. The injector adds two modules running at 1.3 and 3.9 GHz respectively. The cryomodules are operated at 2 Kelvin. Cold compressors (CCs) pump down the liquid helium to 30 mbar, which corresponds to 2 Kelvin. Stable conditions in the cryogenic system are mandatory for successful accelerator operation. Pressure fluctuations at 2 K may cause detuning of cavities and could result in unstable CC operation. The RF losses in the cavities may be compensated by reducing the heater power in the liquid helium baths of the nine cryogenic strings. This requires a stable readout of the current RF settings. The detailed signals are read out from several servers in the accelerator control system and then computed in the cryogenic control system for heater compensation. This paper describes the commissioning of the cryogenic control system, the communication between the control systems involved, and first results of machine operation with the heat loss compensation in place.

Funding: The Swedish Research Council (Vetenskapsrådet, MAX IV / SOLEIL collaboration) and the Île de France region (project FORTE, DIM-Oxymore). Two years ago, the SOLEIL (France) and MAX IV (Sweden) synchrotron light sources started a joint project to partially fund two similar in-vacuum diffractometers to be installed at the tender X-ray beamlines SIRIUS and FemtoMAX. The SOLEIL diffractometer, manufactured by the French company SYMETRIE* and complementarily funded by an Île de France region project (DIM Oxymore) gathering the SIRIUS beamline and other laboratories, features an in-vacuum four-circle goniometer and two hexapods. The first hexapod is used for the alignment of the vacuum vessel, and the second for the alignment of the sample stage, which is mounted on the four-circle diffractometer. In order to integrate this complex mechanical experimental station efficiently into the SOLEIL control architecture, based on TANGO and Delta Tau motion controllers, SOLEIL and SYMETRIE worked in close collaboration. Synchronization of the different elements of the diffractometer is a key issue in this work to obtain a good sphere of confusion, thanks to corrections applied by the in-vacuum hexapod. This paper details this collaboration, the status of the project in terms of control system capabilities, and the results of the first tests. * SYMETRIE Company (hexapods and positioning systems), http://www.symetrie.fr/

The Lanzhou All Permanent magnet ECR ion source No. 2 (LAPECR2) is the ion source for the 320 kV multidiscipline research platform for highly charged ions. Its old control system has been in use for nearly 12 years, and problems that affect its daily operation have gradually been exposed. A set of Beckhoff PLCs is in charge of controlling the magnet power supplies, the diagnostics, and the motion axes. EPICS and Control System Studio (CSS), as well as other packages, are used in this facility as the control software toolkit. Based on these state-of-the-art hardware and software technologies, we designed and implemented a new control system for LAPECR2. After about half a year of running, the new control system has demonstrated its validity and stability in this facility.

CERN's Accelerator Fault Tracking (AFT) system aims to facilitate answering questions like 'Why are we not doing physics when we should be?' and 'What can we do to increase machine availability?' People have tracked faults for many years, using numerous, diverse, distributed and unrelated systems. As a result, and despite a lot of effort, it has been difficult to get a clear and consistent overview of what is going on, where the problems are, how long they last, and what their impact is. This is particularly true for the LHC, where faults may induce long recovery times after being fixed. The AFT project was launched in February 2014 as a collaboration between the Controls and Operations groups with stakeholders from the LHC Availability Working Group (AWG). The AFT system has been used successfully in operation for the LHC since 2015, attracting a lot of attention and generating a growing user community. In 2017 the scope was extended to cover the entire Injector Complex. This paper describes the AFT system and the way it is used in terms of architecture, features, user communities, workflows and added value for the organisation.

The RIKEN Radioactive Isotope Beam Factory (RIBF) is a cyclotron-based heavy-ion accelerator facility for producing unstable nuclei and studying their properties. Many components of the RIBF accelerator complex are controlled using the Experimental Physics and Industrial Control System (EPICS). We present here an overview of the EPICS-based RIBF control system and its latest upgrade work in progress. We are developing a new beam interlock system from scratch to be applied to some of the small experimental facilities in the RIBF accelerator complex. The new beam interlock system is based on a programmable logic controller (PLC), like the existing beam interlock system of RIBF (BIS); however, we newly employ a Linux-based PLC-CPU, on which EPICS programs can be executed in addition to a sequencer, in order to speed up the system. After optimizing the performance of the system while continuing operation, we plan to expand the new system as a successor to the BIS, which has been working for more than 10 years since the start of its operation.

SKA (Square Kilometre Array) is a project aimed at building a very large radio telescope, composed of thousands of antennae and related support systems. The overall orchestration is performed by the Telescope Manager (TM), a suite of software applications. In order to ensure the proper and uninterrupted operation of the TM, a local monitoring and control system called TM Services is being developed. Fault Management (FM) is one of these services, and is composed of the processes and infrastructure associated with detecting, diagnosing and fixing faults, and finally returning to normal operations. The aim of the study, which introduces artificial intelligence algorithms during the detection phase, is to build a predictive model, based on the history and statistics of the system, in order to perform trend analysis and failure prediction. Based on monitoring data and health status detected by the software system monitor, and on log files gathered by the ELK (Elasticsearch, Logstash, and Kibana) server, the predictive model ensures that the system is operating within its normal operating parameters and takes corrective actions in case of failure.
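The simplest form of the trend analysis mentioned above can be sketched as a least-squares line fit with extrapolation to an alarm threshold. The metric, threshold and data here are illustrative placeholders, not SKA monitoring parameters or the actual predictive model:

```python
# Trend-analysis sketch: fit a least-squares line to equally spaced
# samples of a monitored metric and estimate how many samples remain
# until a given failure threshold is crossed.

def steps_to_threshold(samples, threshold):
    """Estimated number of future samples until the fitted linear trend
    reaches `threshold`, or None if the trend is flat or improving."""
    n = len(samples)
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples))
    slope = sxy / sxx
    intercept = y_mean - slope * x_mean
    if slope <= 0:
        return None  # metric is not trending toward the threshold
    crossing = (threshold - intercept) / slope  # x where the fit hits threshold
    return max(0.0, crossing - (n - 1))        # distance from the last sample

# A temperature creeping up by ~1 unit per sample, alarm threshold 10:
eta = steps_to_threshold([1.0, 2.0, 3.0, 4.0, 5.0], 10.0)  # 5 samples left
```

A predictive FM service would run such an estimate continuously per metric and raise an early warning when the estimated time to threshold drops below a safety margin.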

TLS (Taiwan Light Source) is a 1.5 GeV synchrotron light source at NSRRC which has been operating for users for more than twenty years. Many toolkits have been delivered to determine responsibility for downtime and to work out solutions. A new alarm system with an EPICS interface is also applied in these toolkits to prevent machine failures during user time in advance. These toolkits have been tested and refined at the TLS and enhance beam availability. The related operational experience will be migrated to TPS (Taiwan Photon Source) in the future, after long-term operation and big-data statistics. The analysis and implementation results of the system will be reported in this conference.

Funding: Work supported by the German Bundesministerium für Bildung und Forschung, Land Berlin and grants of the Helmholtz Association. The 1.7 GeV light source BESSY II features about 50 beamlines, overbooked by a factor of 2 on average. Thus, availability of high-quality synchrotron radiation (SR) is a central asset. SR users at BESSY II can base their beam time expectations on numbers generated according to the common operation metrics*. Major failures of the facility are analyzed according to * and displayed in real time; analyses of minor detriments are provided regularly by offline tools. Many operational constituents are required for extraordinary availability figures: meaningful alarming and dissemination of notifications; complete logging of program, device, system and operator activities; post-mortem analysis and data mining tools. Preventive and corrective actions are enabled by systematic root cause analysis based on accurate eLog entries, trouble ticketing and consistent failure classifications. This paper describes the tool sets and developments, their implementation status and some showcase results at BESSY II. * Common operation metrics for storage ring light sources, A. Luedeke, M. Bieler, R.H.A. Farias, S. Krecic, R. Mueller, M. Pont, and M. Takao, Phys. Rev. Accel. Beams 19, 082802

Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy. Accessing database resources from Accelerator Controls servers or applications with JDBC/ODBC and other dedicated programming interfaces has been common for many years. However, the availability and performance limitations of these technologies became obvious as rich web and mobile communication technologies became mainstream. HTTP REST services have become a more reliable and common way to provide easy access to most types of data resources, including databases. Several commercial database REST services have become available in recent years, each with their own pros and cons. This paper presents a way of setting up a generic HTTP REST database service with technology that combines the advantages of application servers (such as GlassFish), JDBC drivers, and Java technology to make major RDBMS systems easy to access and to handle data in a secure way. This allows database clients to retrieve data (user data or metadata) in standard formats such as XML or JSON.
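The shape of such a REST-over-database service can be sketched in a few lines. This is a deliberately minimal, read-only Python stdlib illustration of the pattern (HTTP endpoint in front of an RDBMS, JSON out); the table and endpoint names are invented, and the paper's actual service is a Java application server with JDBC, security and full RDBMS support:

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory database standing in for the RDBMS behind the service.
DB = sqlite3.connect(":memory:", check_same_thread=False)
DB.execute("CREATE TABLE devices (id INTEGER PRIMARY KEY, name TEXT)")
DB.executemany("INSERT INTO devices (name) VALUES (?)",
               [("bpm01",), ("quad02",)])

class RestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/devices":
            rows = DB.execute("SELECT id, name FROM devices").fetchall()
            body = json.dumps([{"id": r[0], "name": r[1]} for r in rows])
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    # GET http://localhost:8080/devices -> JSON list of device rows
    HTTPServer(("localhost", 8080), RestHandler).serve_forever()
```

The benefit described in the abstract is exactly this decoupling: any HTTP-capable client can consume the data without a database driver on its side.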

Since the beginning of the computer era, storing and analyzing data has been one of the main focuses of IT systems. It is therefore no wonder that the users and operators of the coming FAIR complex have expressed a strong requirement to collect the data coming from the different accelerator components and store it for future analysis of the accelerator's performance and proper function. This task will be performed by the Archiving System, a component which will be developed by FAIR's Controls team in cooperation with XLAB d.o.o., Slovenia. With more than 2000 devices, over 50000 parameters and around 30 MB of data per second to store, the Archiving System will face serious challenges in terms of performance and scalability. Besides the actual storage complexity, the system will also need to provide mechanisms to access the data in an efficient manner. Fortunately, there are open-source products available on the market which may be utilized to perform the given tasks. This paper presents the first conceptual design of the coming system, the challenges met and choices made, as well as the integration into the coming FAIR system landscape.

The Novosibirsk Free Electron Laser (FEL), based on a multi-turn energy-recovery linac, is a source of coherent radiation with wavelength tunability. It comprises one single-turn and one four-turn microtron-recuperator, which share a common injection channel and accelerating section. Three different free electron lasers are mounted on different tracks of these accelerators, operating at different electron beam energies and with different wavelength ranges and radiation powers. The whole FEL facility is a complex physics installation controlled by a large amount of equipment of different types. Therefore, a dedicated control system was developed for effective control and monitoring of the FEL operation state and its parameters. In this paper the architecture, hardware, and software components of this control system are considered, and its main capabilities, characteristics, and examples of its usage are presented.

A proton facility based on a superconducting cyclotron for cancer treatment is to be built by Huagong Tech Company Limited, Wuhan, China. This facility is aimed at providing proton beams with continuously tuneable energy from 70 MeV to 250 MeV for various kinds of cancer treatment. Our team is responsible for the development of the treatment control system, which consists of a number of functional modules and connects to many subsystems. In this paper, we report our conceptual design of the treatment control system.

This article gives a brief description of the timing system for the Heavy Ion Research Facility in Lanzhou - Cooler Storage Ring (HIRFL-CSR). It introduces in detail the timing system architecture, hardware and software. We use a standard event-system architecture: the system is mainly composed of the event generator (EVG), the event receivers (EVRs) and the event fan-out modules, arranged in a standard three-layer structure. The OPI layer generates and monitors events; the intermediate layer transmits and fans out events; and the device control layer interprets the events. We adopted our own R&D EVG to generate the events of the virtual accelerator, and our own event fan-out module to distribute the events. In the equipment control layer, we use an FPGA-based EVR design to interpret the events for the different equipment and achieve orderly operation. The timing system realizes ion beam injection, acceleration and extraction.

The accelerator complex at CERN is a living system. Accelerators are dismantled, upgraded or change their purpose, and new accelerators are built. The changes do not happen overnight, but when they do they may require profound changes across the supporting systems. Central timings (CTs), responsible for the sequencing and synchronization of accelerators, are good examples of such systems. This paper shows how, over the past twenty years, these changes and new requirements have influenced the evolution of the CTs. It describes experience gained from using the CBCM CT model for strongly coupled accelerators, and how this led to the design of a new Dynamic Beam Negotiation (DBN) model for the AD and ELENA accelerators, which reduces the coupling and increases accelerator independence. The paper ends with an idea of how to merge the strong points of both models in order to create a single generic system able to efficiently handle all the CERN accelerators involved and provide more beam time to experiments and the LHC.

Injecting beams into CERN facilities is subject to the CERN safety rules. It is for this reason that the Beam Permit approval procedure was improved by moving away from a paper-based workflow to a digital form. For each facility, the Beam Permits are signed by the various responsible specialists (access systems, safety equipment, radiation protection, etc.). To achieve this, CERN's official Engineering Data Management System (EDMS) is used. The functionality of EDMS was extended to accommodate the additional requirements while keeping a user-friendly web interface. In addition, a new webpage within the CERN OP-webtools site was created with the purpose of providing a visual overview of the Beam Permit status for each facility. This new system is used in the CERN Control Centre (CCC), and it allows the operations team and all people involved in the signature process to follow the Beam Permit status in a more intuitive, efficient and safe way.

The Korea Superconducting Tokamak Advanced Research (KSTAR) interlock-related systems are configured with various systems, such as the fast interlock, supervisory interlock, plasma control, central control, and heating systems, using various types of hardware, software, and interface platforms. For each system, monitoring and analysis tools are already well developed. However, for the analysis of system fault behavior, these heterogeneous platforms do not help in finding the relations between failures. When interlock events are latched or the pulse is stopped by the PCS, events are transmitted to different actuators, which can trigger further events via various interfaces; in other words, a fault in one system can become a cause of faults in another. Through this application we will determine the sequence of fault factors during pulse-by-pulse KSTAR operation. The KSTAR Data Integration System (KDIS) is configured with the KSTAR event-driven architecture and data processing environment. This application will be developed in the KDIS environment and synchronized with KSTAR events. This paper will present the development of the shot fault sequence analysis logic and its application with KDIS.

The reliable protection of the ESS equipment is important for the success of the project. This requires multiple systems and subsystems to perform the required protection functions that prevent undesired hazardous events. The complexity of the machine, the different technical challenges and the intrinsic organisational difficulties for an in-kind project like ESS impose serious challenges to the distributed Machine Protection strategy. In this contribution, the difficulties and adopted solutions are described to exemplify the technical challenges encountered in the process.

Funding: Work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. With the success and reliability of the transverse feedback system installed at the Advanced Photon Source (APS), a major upgrade to expand the system is under way. The existing system operates at a third of the storage ring bunch capacity, or 324 of the available 1296 bunches. This upgrade will allow the sampling of all 1296 bunches and make corrections for all selected bunches in a single storage ring turn. To facilitate this upgrade, a new analog I/O board capable of 352 MHz operation was developed along with a revolution clock cleaning circuit. A 352 MHz clock cleaning circuit was also required for the high-speed analog output circuit to maintain data integrity to the receiving DAC unit, which is 61 m away. This receiving DAC unit will have its transceiver data rate upgraded from 2.3 Gbps to about 7 Gbps, transmitted over a fiber optic link. This paper discusses some of the challenges in reducing the clock jitter from both the system P0 bunch clock and the 352 MHz clock, along with the necessary FPGA hardware upgrades and algorithm changes, all of which are required for the success of this upgrade.

With high-intensity beams, precise measurement and effective correction of the betatron coupling are essential for the performance of the Large Hadron Collider (LHC). In order to measure this parameter, the LHC transverse damper (ADT), used as an AC dipole, will provide the necessary beam excitation. The beam oscillations will be recorded by the Beam Position Monitors (BPMs) and transmitted to dedicated analysis software. We set up the project with a three-layer software architecture: the central node is a Java server orchestrating the different actors: the graphical user interface, the control and triggering of the ADT AC dipole, the BPMs, the oscillation analysis (partly in Python), and finally the transmission of the correction values. The whole system is currently being developed by a team using Scrum, an iterative and incremental agile software development framework. In this paper we present an overview of this system, experience from machine development and commissioning, and how Scrum helped us to achieve our goals. Improvements and re-use of the architecture, with a clean decoupling between data acquisition and data analysis, are also briefly discussed.

This paper describes a new software tool recently developed at CERN, called the New CPS Beam Optimiser. This application allows the automatic optimization of beam properties using a statistical method, which has been modified to suit the purpose. Tuning beams is laborious and time-consuming; therefore, to gain operational efficiency, this new method to perform an intelligent automatic scan sequence has been implemented. The application, written in JavaFX, uses the CERN control group's standard libraries and is deliberately simple. The user-friendly GUI allows operators to configure different optimization processes in a dynamic and easy way. Different measurements, complemented by simulations, have been performed to understand the response of the algorithm. These results are presented here, along with the modifications still needed in the original mathematical libraries.
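The abstract does not detail the statistical method, but the idea of an automatic scan sequence that homes in on the best setting can be sketched as follows; the `measure` callable, the grid size and the zoom-in factor are illustrative assumptions, not the actual CERN implementation:

```python
import random

def auto_scan(measure, lo, hi, steps=7, iterations=3, shrink=0.5):
    """Iteratively scan a setting range and zoom in around the best point.

    `measure(x)` is a hypothetical callable returning the beam property
    to maximise (e.g. an averaged intensity reading) for setting x.
    """
    best_x = lo
    for _ in range(iterations):
        # Evaluate an evenly spaced grid of settings in [lo, hi].
        xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
        best_x = max(xs, key=measure)
        # Shrink the search window around the best setting found so far.
        half = (hi - lo) * shrink / 2
        lo, hi = best_x - half, best_x + half
    return best_x

# Toy response: a noisy peak at x = 1.2.
random.seed(0)
def response(x):
    return -(x - 1.2) ** 2 + random.gauss(0, 1e-4)

setting = auto_scan(response, -5.0, 5.0)
```

A real optimiser would also average repeated measurements and stop when the gain per iteration falls below the measurement noise.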

Funding:This work was supported by JSPS KAKENHI Grant Number 26800153. The Hadron Experimental Facility is designed to handle an intense slow-extracted proton beam from the 30-GeV Main Ring of the Japan Proton Accelerator Research Complex (J-PARC). We have developed a new control system for a magnet power supply based on a Programmable Logic Controller (PLC). The control PLC handles the status of the interlock signals between a power supply and a magnet, and monitors the output voltage and current. The PLC also controls a programmable reference voltage to regulate the output current. In addition, we have been developing an automatic orbit-correction program built on this power supply control system. Earlier data from the beam profile monitors located upstream of the beam dump, together with the temperature distribution on the beam dump, suggest that automatic correction of the beam orbit onto the beam dump is feasible. The optimized current for the horizontal steering magnet was calculated from the horizontal displacement of the proton beam measured with the beam profile monitors. This paper reports the current status of the power supply control system, which can automatically correct the horizontal beam position at the beam dump.
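The core of such a correction, computing a steering-current change from a measured displacement, can be sketched in a few lines; the response coefficient, the sign convention and the clipping to supply limits are assumptions for illustration, not the actual J-PARC calculation:

```python
def steering_correction(displacement_mm, response_mm_per_A, i_max_A):
    """Return the steering-current change that cancels a measured
    horizontal displacement, clipped to the power supply limits.

    `response_mm_per_A` is a (hypothetical) measured orbit response at
    the profile monitor per ampere of steering current.
    """
    delta_i = -displacement_mm / response_mm_per_A
    # Never request more than the supply can deliver.
    return max(-i_max_A, min(i_max_A, delta_i))

# A +2.0 mm drift with a 0.5 mm/A response needs a -4.0 A correction.
correction = steering_correction(2.0, 0.5, 10.0)
```

In practice the response coefficient would be calibrated from machine data, and the correction applied in small steps while re-reading the profile monitors.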

High-intensity hadron colliders and fixed-target experiments at CERN require an increasing number of robotic tele-manipulation interventions to prevent and reduce excessive exposure of maintenance personnel to the radioactive environment. Tele-manipulation tasks are often required on dated radioactive devices that were not conceived to be maintained and handled using standard one-arm robotic solutions. Instead, these tasks call for robotic platforms with a level of dexterity that often requires two robotic arms with a minimum of six degrees of freedom. In this paper, the control of a novel, robust robotic platform able to host and safely carry a dual-arm robotic system is presented. The arm and vehicle controls are fully integrated to keep the execution of robotic tasks simple for the operators. A novel high-level control architecture for the new robot is shown, as well as a novel low-level safety layer for anti-collision and recovery scenarios. Preliminary results of the system commissioning are presented using CERN accelerator facilities as a use case.

A C/C++ software improvement process (SIP4C/C++) has been increasingly applied by the CERN accelerator Controls group since 2011, addressing technical and cultural aspects of our software development work. A first paper was presented at ICALEPCS 2013*. On the technical side, a number of off-the-shelf software products have been deployed and integrated, including Atlassian Crucible (code review), Google Test (unit testing), Valgrind (memory profiling) and SonarQube (static code analysis). Likewise, certain in-house developments are now operational, such as a Generic Makefile (compile/link/deploy), CMX (for publishing runtime process metrics) and Manifest (capturing library dependencies). SIP4C/C++ has influenced our culture by promoting the integration of said products into our binaries and workflows. We describe the current status of the technical solutions and how they have been integrated into our environment. Based on testimony from four project teams, we present reasons for and against adoption of individual SIP4C/C++ products and processes. Finally, we show how SIP4C/C++ has improved development and delivery processes as well as the first-line support of delivered products.*http://jacow.org/ICALEPCS2013/papers/moppc087.pdf, http://jacow.org/ICALEPCS2013/posters/moppc087_poster.pdf

NICA (Nuclotron-based Ion Collider fAcility) is a new accelerator complex designed at the Joint Institute for Nuclear Research (Dubna, Russia) to study the properties of dense baryonic matter. This report describes Tango modules designed at JINR to provide web access to the Tango-based control system. RestDS is a lightweight Tango REST service, developed in C++ with the Boost and OpenSSL libraries; it implements the Tango REST API and the Tango JINR REST API. WebSocketDS is a lightweight Tango WebSocket service, developed in C++ with the WebSocket++, Boost and OpenSSL libraries; it implements Tango attribute reading and command execution through WebSockets. The report also gives examples of web client applications for the NICA control system using these services.
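A web client of such a REST service ultimately decodes a JSON attribute-read reply. The sketch below shows that decoding step only; the field names in the sample payload are illustrative, not the normative Tango REST API schema:

```python
import json

# Illustrative shape of a JSON reply to an attribute read; real
# services define their own (and richer) field sets.
sample = ('{"name": "current", "value": 237.4,'
          ' "quality": "ATTR_VALID", "timestamp": 1502803200}')

def read_attribute(payload):
    """Decode an attribute-read reply and reject non-valid qualities."""
    data = json.loads(payload)
    if data.get("quality") != "ATTR_VALID":
        raise ValueError("attribute quality is %s" % data.get("quality"))
    return data["name"], data["value"]

name, value = read_attribute(sample)
```

Checking the quality flag before using the value mirrors what a Tango client library does natively on the client side.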

NICA (Nuclotron-based Ion Collider fAcility) is a new accelerator complex being constructed at the Joint Institute for Nuclear Research (Dubna, Russia). It will provide heavy-ion colliding experiments to study the properties of dense baryonic matter. The TANGO-based control system of the NICA complex is now under development. This report describes the design of the role-based authorization and logging system. It allows limiting access to any Tango device command or attribute according to a user's roles and location. The system also restricts access to the Tango database and records details of its modifications. The authorization is performed on the Tango server side, thus complementing the native TANGO client-side access control. First tests of the system were performed during the latest Nuclotron run.

The Polish National Center for Synchrotron Radiation SOLARIS UJ is being prepared for its first users. To facilitate the processes of user management, proposal submission, review and beam time allocation, the SOLARIS Digital User Office (DUO) project has been started. The DUO is developed in collaboration with the Academic Computer Center CYFRONET AGH and consists of several main components. The user management component allows user registration and user affiliation management. The proposal submission component facilitates filling in the proposal form and indicating co-proposers and experimentalists. The review component supports the decision-making process, including the Review Meeting event and proposal grading. Apart from managing the main processes, the application provides additional functionality (e.g. experimental reports, trainings, feedback). DUO was designed as an open platform to face the challenges related to the continually changing SOLARIS facility. Therefore, the business logic is described as an easily maintainable rule-based specification. To achieve a good user experience, modern web technologies were used, including Angular for the front end and Java Spring for the server.

The quoted machine availability of a particle accelerator over some time range is usually hand-generated by a machine coordinator, who pores over archived operations parameters and logbook entries for the time period in question. When the machine is deemed unavailable for operations, 'blame' is typically assigned to one or more machine sub-systems. With a 'perfect' representation of all possible machine states and all possible fatal alarms it is possible to calculate machine availability and assign blame automatically and thereby remove any bias and uncertainty that might creep in when a human is involved. Any system which attempts to do this must nevertheless recognize the de-facto impossibility of achieving perfection and allow for 'corrections' by a machine coordinator. Such a system for automated availability statistics was recently presented* and we now report on results and improvements following a half year in operation at PETRA-3 and its accelerator chain.* Duval, Lomperski, Ehrlichmann, and Bobar, "Automated Availability Statistics", Proceedings PCaPAC 2016.
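The bookkeeping behind such automated statistics, summing downtime per blamed subsystem and deriving the availability figure, can be sketched as follows; the data shapes and subsystem names are illustrative assumptions, not the PETRA-3 implementation:

```python
def availability_and_blame(total_hours, downtimes):
    """Compute overall availability and per-subsystem downtime.

    `downtimes` is a list of (hours_lost, subsystem) tuples as they
    would be derived from fatal-alarm and machine-state records.
    """
    blame = {}
    for hours, subsystem in downtimes:
        blame[subsystem] = blame.get(subsystem, 0.0) + hours
    lost = sum(blame.values())
    return (total_hours - lost) / total_hours, blame

# One week of operation with two RF trips and one magnet fault.
avail, blame = availability_and_blame(
    168.0, [(3.0, "RF"), (1.5, "Magnets"), (0.5, "RF")])
```

A coordinator's manual 'corrections', as described above, would amount to editing the downtime list before this aggregation step.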

A new generation of superconducting magnets is being developed in the framework of the HL-LHC upgrade project. Several laboratories in Europe, the USA, Japan and Russia collaborate on this project. One of the tasks assigned to CERN is to conduct the optimization tests, and later the series tests, for the MQXFS and MQXF-A/B magnets. A new dedicated test bench has been built at the CERN superconducting magnet test facility (SM18), where these magnets will be evaluated under their operational conditions in the LHC tunnel. To fulfil the test conditions on these high-performance magnets, a new high-frequency data acquisition (DAQ) system has been designed, together with new software used to control two 15 kA power converters. This article presents the technical aspects of these two major components of the test platform, from the PXIe hardware selection of the DAQ system to the deployment of the operational applications. The commissioning phase and results of the first measurement campaign are also reported.

The Large Hadron Collider (LHC) is equipped with a complex collimation system to protect sensitive equipment from unavoidable beam losses. Collimators are positioned close to the beam using an alignment procedure. Until now they have always been aligned assuming no tilt between the collimator and the beam; however, tank misalignments or beam envelope angles at large-divergence locations could introduce a tilt limiting the collimation performance. This paper describes three different algorithms to automatically align a chosen collimator at various angles. The implementation was tested with and without beam at the SPS and the LHC. No human intervention was required, and the three algorithms converged to the same optimal tilt angle.
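One generic way to search for an optimal tilt angle, assuming the loss signal has a single minimum over the scanned range, is a ternary search; this is a sketch under that assumption, not one of the three algorithms the paper describes, and `loss_at` stands in for a beam-loss-monitor reading at a given tilt:

```python
def find_optimal_tilt(loss_at, lo, hi, tol=1e-3):
    """Ternary search for the tilt angle minimising the loss signal,
    assuming a single minimum in [lo, hi]."""
    while hi - lo > tol:
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        # Discard the third of the interval that cannot hold the minimum.
        if loss_at(a) < loss_at(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

# Mock loss profile with its minimum at a 0.35 mrad tilt.
tilt = find_optimal_tilt(lambda t: (t - 0.35) ** 2, -2.0, 2.0)
```

With real beam-loss signals, averaging over noise at each probed angle would be essential before comparing the two readings.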

The linear superconducting accelerator at the new European XFEL facility will be able to produce up to 2700 electron bunches per shot at a repetition rate of 10 Hz. The bunch repetition rate might initially vary between 100 kHz and 4.5 MHz to accommodate the various needs of experiments at three different SASE beamlines. A solution able to provide bunch-resolved data from multiple data sources together in one place for each shot has been implemented at the E-XFEL as an integral part of the accelerator control system. It will serve as a framework for high-level control applications, including online monitoring and slow feedback services. A similar system has been run successfully at the FLASH facility at DESY for more than a decade. This paper presents the design, implementation and first experiences from commissioning the XFEL control system data acquisition.

This contribution reviews the novel LHC luminosity control software stack. All luminosity-related manipulations and scans in the LHC interaction points are managed by the LHC luminosity server, which enforces concurrency correctness and transactionality. Operational features include luminosity optimization scans to find the head-on position, luminosity levelling, and the execution of arbitrary scan patterns defined by the LHC experiments in a domain-specific language. The LHC luminosity server also provides full built-in simulation capabilities for testing and development without affecting the real hardware. The performance of the software in 2016 and 2017 LHC operation is discussed and plans for further upgrades are presented.

At CERN, the LHC (Large Hadron Collider) cryogenic system employs about 4900 PID (Proportional-Integral-Derivative) regulation loops distributed over the 27 km of the accelerator. Tuning all these regulation loops is a complex task, and their systematic monitoring should be done in an automated way to ensure that overall plant performance is improved by identifying the poorest-performing PID controllers. It is nearly impossible to check the performance of a regulation loop with a classical threshold technique, as the controlled variables can evolve over large operating ranges and the amount of data cannot be checked manually every day. This paper presents the adaptation and application of an existing regulation-performance indicator algorithm to the LHC cryogenic system and the results obtained in the past year of operation. This technique is generic for any PID feedback control loop, does not use any process model and needs only a few tuning parameters. The publication also describes the data analytics architecture and the different tools deployed on the CERN control infrastructure to implement the performance indicator algorithm.
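To make the idea of a model-free loop indicator concrete, here is one simple possibility, a setpoint-tracking error normalised by the loop's operating span so loops with very different ranges can be compared on one scale; this is an illustrative indicator, not necessarily the one deployed at CERN:

```python
def tracking_index(setpoints, measurements, span):
    """Model-free performance indicator for a PID loop over a window:
    RMS control error normalised by the loop's operating span.
    Needs no process model, only the logged setpoint/measurement pairs."""
    errs = [(m - s) ** 2 for s, m in zip(setpoints, measurements)]
    rms = (sum(errs) / len(errs)) ** 0.5
    return rms / span

# A loop holding 4.5 K within a few mK over a 1 K span scores very low.
idx = tracking_index([4.5] * 4, [4.501, 4.499, 4.502, 4.498], 1.0)
```

Ranking the 4900 loops by such an index each day would directly surface the poorest performers for retuning.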

FEL tuning and optimization within the OCELOT framework was implemented in 2015 and has since been used for SASE pulse energy optimization at FLASH and later at LCLS, as well as for injection efficiency maximization in the Siberia-1 storage ring. For the European XFEL commissioning, the code was considerably improved and an additional set of tools was introduced. Here these tools and the experience of their use during the European XFEL commissioning and initial operation are presented. Future development directions are also outlined.

The operational log system is an electronic log system for recording and viewing the accelerator operation time and the contents of operated devices. Zlog (Zope-based log system)*, developed by KEK, was utilized for the RIBF control system. Zope is an open-source Web server and Web application framework written in Python. Using this Web application, information on accelerator operation is displayed as character strings in Web browsers. Although the Web-based system has many advantages, the displayed strings can become hard for accelerator operators to follow, because many parameters change during accelerator operation. For smoother accelerator operation, an ergonomically designed operational log system is required. Therefore, we developed a new operational log system for the RIBF control system. The new system can present operational logs with a variety of rich GUI components. As of now, the operational log system has been in service for accelerator operation, monitoring approximately 3,000 EPICS records, without any serious problems.*K. Yoshii et al.: Proc. ICALEPCS07, (2007), p. 299.
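The core transformation in such a system is turning a monitored record change into a readable log entry. A minimal sketch, in which the record name and the line layout are hypothetical rather than the RIBF format:

```python
from datetime import datetime

def format_log_entry(record, old, new, when):
    """Turn a monitored EPICS record change into a human-readable
    operational-log line (record name and layout are illustrative)."""
    stamp = when.strftime("%Y-%m-%d %H:%M:%S")
    return "%s  %s: %s -> %s" % (stamp, record, old, new)

entry = format_log_entry("rf:amp:set", 12.0, 12.5,
                         datetime(2017, 10, 9, 14, 3, 20))
```

In the deployed system, a monitor callback on each of the ~3,000 records would feed such entries to the GUI instead of building strings by hand.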

Funding:INAF. User-Centered Design is a powerful approach for designing UIs that match and satisfy users' skills and expectations. Interviews, affinity diagrams, personas and usage scenarios are some of the fundamental tools for gathering and analysing relevant information. We applied these techniques to the development of the UI for the control room of the Square Kilometre Array (SKA) telescopes. We interviewed the personnel at two of the SKA precursors, LOFAR and MeerKAT, with the goal of understanding which features satisfy operators' needs and which can be improved. What was learned includes several usability issues dealing with fragmentation and low cohesiveness of the UIs, some gaps, and an excessive number of user actions needed to achieve certain goals. Low usability of the UI and the large scale of SKA are two challenges in developing its UI, because they affect the extent to which operators can focus on important data, as well as the likelihood of human errors and their consequences. This paper illustrates the method followed, provides examples of some of the artefacts that were produced, and describes and motivates the resulting usability recommendations, which are specific to SKA.

ESO is in the process of designing a new instrument control application framework for the ELT project. During this process, we have used the experience in HW control gained from the first and second generation of VLT instruments that have been in operation for almost 20 years. The preliminary outcome of this analysis is a library of Statecharts models illustrating the behaviour of some of the most commonly used devices in telescope and instrument control systems. This paper describes the architectural aspects taken into consideration when designing the models such as HW/SW state representation, common/specialized behaviour, and failure management. An extension to Harel's formalism to facilitate reusability by dynamic creation of orthogonal regions is also proposed. The paper details the behaviour of some devices like shutters, lamps and motors together with the rationale behind the modelling choices. A mapping of the models to a concrete implementation using real HW components is suggested. Although these models have been designed following the principles of our conceptual architecture, they are still generic and platform independent, so they can be easily reused in other projects.

Security policies are becoming hard to apply as instruments get smarter than ever: every oscilloscope gets its own stick with a Windows tag, everybody would like to control their huge installation over the air, and IoT is on everyone's lips. Stuxnet and the recent Edward Snowden revelations have shown that cyber threats to SCADAs are not confined to James Bond movies. This paper aims to give simple advice to help protect our installations and make them more and more secure. How should security files be written? What are the main precautions we have to take? Where are the vulnerabilities of my installation? Cyber security is everyone's matter, not only the cyber staff's!

Funding:European Space Agency. The European Ground System Common Core (EGS-CC) initiative is now materializing. The goal of this initiative is to define, build and share a software framework and implementation that will be used as the main basis for pre- and post-launch ground systems (Electrical Ground Support Equipment and Mission Control Systems) of future European space projects. The initiative has been in place since 2011 and is led by the European Space Agency as a formal collaboration of the main European stakeholders in the space systems control domain, including European Space National Agencies and European Prime Industry. The main expected output of the EGS-CC initiative is a core system which can be adapted and extended to support the execution of pre- and post-launch monitoring and control operations for all types of missions and throughout the complete life-cycle of space projects. This presentation will introduce the main highlights of the EGS-CC initiative, its governance principles, the fundamental concepts of the resulting products and the challenges that the team is facing.

Funding:U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The National Ignition Facility (NIF) is the world's largest and most energetic laser experimental facility, with 192 beams capable of delivering 1.8 megajoules and 500 terawatts of ultraviolet light to a target. Officially commissioned as an operational facility on March 21, 2009, NIF is expected to conduct research experiments through 2039. The 30-year lifespan of the control system presents several challenges in meeting reliability, availability, and maintainability (RAM) expectations. As NIF continues to expand its experimental capabilities, the control system's software base of 3.5 million lines of code grows, with most of the legacy software still in operational use. Supporting this software is further complicated by technology life cycles and the turnover of senior experienced staff. This talk will present lessons learned and new initiatives related to technology refreshes, risk mitigation, and changes to our software development and test methodology to ensure high control system availability for supporting experiments throughout NIF's lifetime. LLNL-ABS-727374

Safety is likely the most critical concern in many process industries, yet there is general uncertainty about the proper engineering needed to reduce risks and ensure the safety of persons and material while also providing the process control system. Some of the reasons for this misperception are unclear requirements, a lack of functional safety engineering knowledge, and protection functionalities incorrectly attributed to the BPCS (Basic Process Control System). Occasionally control engineers are not aware of the hazards inherent to an industrial process, which leads to an inadequate design of the overall controls. This paper illustrates the engineering of the SIS (Safety Instrumented System) and the BPCS for the plasma vapour controls of the AWAKE R&D project, the first proton-driven plasma wakefield acceleration experiment in the world. The controls design and implementation follow the IEC 61511/ISA 84 standard, covering technological choices, design, operation and maintenance. Finally, the publication discusses the difficulties that typically appear in this kind of industrial installation and the actions needed to ensure a proper functional safety system design.

Providing and assuring safe conditions for personnel is a key requirement for operating the European Spallation Source (ESS). The main purpose of the Personnel Safety Systems (PSS) at ESS is to protect workers from the facility's prompt ionising radiation hazards, but also to identify and mitigate other hazards such as high voltage or oxygen depletion. The PSS consist of three systems: the safety interlock system, the access control system and the oxygen deficiency hazard (ODH) detection system. The safety interlock system ensures the safety functions of the PSS by controlling all hazardous equipment involved in starting beam operation and powering the RF-powered units, allowing their operation only when personnel are safe. This paper will describe the scope, strategy, methodology and current status of the ESS PSS accelerator safety interlock system.

Funding:U.S. Department of Energy's National Nuclear Security Administration, DE-NA0003525. The Z Machine is the world's largest pulsed power machine, routinely delivering over 20 MA of electrical current to targets in support of US nuclear stockpile stewardship and in pursuit of inertial confinement fusion. The large-scale, multi-disciplinary nature of experiments ('shots') on the Z Machine requires resources and expertise from disparate organizations with independent functions and management, forming a Collaborative System-of-Systems. This structure, combined with the Emergent Knowledge Processes central to preparation and execution, creates significant challenges in planning and coordinating the activities leading up to a given experiment. The present work demonstrates an approach to scheduling planned shot-day activities to aid in coordinating workers among these different groups, using minimal information about the activities' temporal relationships to form a Simple Temporal Network (STN). Historical data is mined, allowing a standard STN to be created for common activities, with the lower bounds between those activities defined. Activities are then scheduled at their earliest possible times to give interested participants a time to check in. maschaf@sandia.gov
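Computing earliest start times from an STN's lower bounds is a standard graph relaxation. A minimal sketch, with hypothetical shot-day activity names and gap values for illustration:

```python
def earliest_times(activities, min_gaps):
    """Earliest start time of each activity in a Simple Temporal
    Network.  `min_gaps` holds (before, after, lower_bound) edges,
    meaning t[after] >= t[before] + lower_bound, with all times
    relative to a t = 0 anchor.  Plain Bellman-Ford-style relaxation."""
    t = {a: 0.0 for a in activities}
    for _ in range(len(activities)):
        changed = False
        for u, v, gap in min_gaps:
            if t[u] + gap > t[v]:
                t[v] = t[u] + gap
                changed = True
        if not changed:
            break
    return t

# Hypothetical shot-day fragment (gaps in minutes).
t = earliest_times(
    ["load_target", "align", "charge", "fire"],
    [("load_target", "align", 30), ("align", "charge", 45),
     ("load_target", "charge", 90), ("charge", "fire", 15)])
```

Note how the direct `load_target -> charge` bound of 90 minutes dominates the 30 + 45 chain, pushing `charge` later than the path through `align` alone would.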

The European XFEL is currently in its commissioning phase. One of the major tasks is to bring up all 25 installed RF stations, which will allow a beam energy of up to 17.5 GeV. It is expected that a klystron may fail every one to two months. The accelerator is currently designed with an energy overhead corresponding to 2-3 RF stations, as the last 4 accelerating modules will be installed at a later stage. This overhead allows the missing energy to be recovered with the remaining functioning RF stations, keeping downtime as short as possible, on the order of seconds. The concept and the corresponding high-level software accomplishing this task are presented in this paper.

Since operation started more than 10 years ago, Synchrotron SOLEIL has used acquisition architectures mainly based on CompactPCI (CPCI) systems. In the last few years, however, obsolescence issues with CPCI products have accelerated, and it has been identified that this technology would become a performance bottleneck for new projects. The MACUP project was therefore created with two main objectives: maintaining current facility operations by addressing the hardware obsolescence risks, while searching for alternative high-performance solutions with better embedded processing capabilities to face challenging new requirements. An additional guideline for the project is to facilitate collaborative work on accelerator and beamline projects by evaluating and standardizing a limited set of technologies such as the Xilinx ZYNQ SoC, VITA 57 FMC and μTCA standards. This paper describes the methodologies and roadmap adopted to drive this project.

Industrial power supplies deliver high and low voltage to a wide range of CERN's detector and accelerator components. These power supplies, sourced from external companies, are integrated into control systems via industry-standard OPC servers. The servers are now being modernized. A key lesson learnt from running the previous generation of OPC servers is that vendor-specific, black-box implementations can be costly in terms of support effort, particularly in diagnosing problems in large production-site deployments. This paper presents the projects producing the next generation of OPC servers, following an open, collaborative approach and a high degree of homogenization across the independent partners. The goal is to streamline development and support costs via code re-use and a template architecture. The collaborations aim to optimally combine CERN's OPC and production operations knowledge with each company's experience in integrating their hardware. This paper describes the considerations and constraints taken into account, including legal aspects, product commercialization and technical requirements, to define a common collaborative approach across three hardware manufacturers.

This article introduces a project to upgrade the control system of the main booster power supplies of the ALBA synchrotron. A brief description of the booster power supplies and the motivation for this upgrade is given. The several options being evaluated for the upgrade are discussed, and different possible architectures are presented. Finally, conclusions about how to approach this kind of project are given.

The cryogenic system is one of the most critical components of the CERN Large Hadron Collider (LHC) and its associated experiments ATLAS and CMS. In the past years, the cryogenic team has improved the maintenance plans and operating procedures and achieved very high reliability. However, as the recovery time after failure remains the major issue for cryogenic availability, new developments must take place. A new online diagnostic tool is being developed to identify and anticipate failures of cryogenic field equipment, based on the knowledge acquired from dynamic simulation of the cryogenic equipment and on previous data analytics studies. After identifying the most critical components, we will develop their associated models together with the signatures of their failure modes. The proposed tools will detect deviations between the actual systems and their models or identify preliminary failure signatures. This information will allow the operation team to take early mitigating actions before a failure occurs.

Recent neutron scattering experiments generate large quantities and various kinds of experimental data. At J-PARC MLF, it is possible to conduct many experiments under various conditions in a short time, thanks to the high-intensity neutron beam and high-performance neutron instruments with a wealth of sample environment equipment. Efficient and effective data analysis is therefore required. Additionally, since almost nine years have passed since the beginning of operation at MLF, much equipment and many systems are due for renewal, with failures resulting from aging degradation. Since such failures can cost precious beam time, failures or their early signs should be detected promptly. The MLF status analysis system, based on the Elasticsearch, Logstash and Kibana (ELK) Stack, a rapidly growing web-based framework for big data analysis, ingests various data from neutron instruments in real time. It provides insight for decision-making in data analysis, experiments and instrument maintenance through flexible user-based analysis and visualization. In this paper, we report the overview and development status of our status analysis system.

Since the introduction of the map-reduce paradigm, relational databases have been increasingly replaced by more efficient and scalable architectures, in particular in environments where a query will process terabytes or even petabytes of data in a single execution. The same tendency is observed at CERN, where data archiving systems for operational accelerator data are already working well beyond their initially provisioned capacity. Most modern data analysis frameworks are not optimized for heterogeneous workloads such as those arising in the dynamic environment of one of the world's largest accelerator complexes. This contribution presents a Mixed Partitioning Scheme Replication (MPSR) as a solution that will outperform conventional distributed processing environment configurations for almost the entire phase space of data analysis use cases and performance optimization challenges arising during the commissioning and operational phases of an accelerator. We present the results of a statistical analysis as well as benchmarking of the implemented prototype, which allow us to define the characteristics of the proposed approach and to confirm the expected performance gains.
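The intuition behind mixed partitioning, that different replicas of the same data can be partitioned along different axes so each query shape hits a small partition, can be sketched as follows; the partition counts, the hashing choice and the two axes (time and signal name) are assumptions for illustration, not the MPSR design itself:

```python
import zlib

def mpsr_targets(signal, day, n_time_parts, n_signal_parts):
    """Route one record to two replica sets partitioned differently:
    one by time (serving time-range scans) and one by signal name
    (serving per-signal history queries).  Each query type then reads
    a single partition instead of scanning the whole archive."""
    time_part = day % n_time_parts
    # CRC32 gives a cheap, deterministic hash of the signal name.
    signal_part = zlib.crc32(signal.encode()) % n_signal_parts
    return {"time_replica": time_part, "signal_replica": signal_part}

loc = mpsr_targets("LHC.BLM.06R7", day=143,
                   n_time_parts=8, n_signal_parts=16)
```

Writes cost twice as much storage, which is the usual trade-off replication schemes accept in exchange for predictable read performance.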

The network systems for the J-PARC accelerators have been in operation for over ten years. This report gives: a) an overview of the control network system, b) a discussion of the relationship between the control network and the office network, and c) recent security issues (antivirus policy) for terminals and servers. Operational experiences, including troubles, are also presented.

In 2017 the Injection Complex at the Budker Institute, Novosibirsk, Russia, began operating for its consumers, the colliders VEPP-4 and VEPP-2000. For the successful functioning of these installations, it is very important to ensure the stable operation of their control systems and IT infrastructure. This article describes the new IT infrastructures of three accelerators: the Injection Complex, VEPP-2000 and VEPP-4. The IT infrastructure for the accelerators consists of servers, network equipment and system software with a 10-20 year life cycle and timely support. The reason to build all three IT infrastructures on the same principles is to minimize costs and simplify support. The points underlying the design are high availability, flexibility and low cost. The first is achieved through hardware redundancy: doubling of servers, disks and network interconnections. Flexibility comes from extensive use of virtualization, which allows easy migration from one piece of hardware to another in case of a fault and gives users the ability to use custom system environments. Low cost follows from equipment unification and minimizing proprietary solutions.

During the 2017 Year-End Technical Stop of the Large Hadron Collider at CERN, the CMS experiment successfully installed a new pixel detector in the frame of the Phase I upgrade. This new detector will operate using evaporative CO2 technology as its cooling system. Carbon dioxide, the state-of-the-art technology for current and future tracking detectors, allows significant material budget savings, which is critical for tracking performance. The road towards operation of the final CO2 cooling system in the experiment passed through an intensive prototyping phase at the CMS Tracker Integration Facility (TIF), for both the cooling process hardware and its control system. This paper briefly describes the general design of both the CMS and TIF CO2 detector cooling systems, and focuses on the control system architecture, operation and safety philosophy, commissioning results and operational experience. Additionally, experience in using the EtherNet/IP industrial fieldbus as distributed I/O is presented. The pros and cons of using this technology are discussed, based on the solutions developed for Schneider Premium PLCs and WAGO and FESTO I/Os using the UNICOS CPC 6 framework of CERN.

Funding: The Key Fund for Outstanding Youth Talent of the Anhui Educational Commission of China (No. 2013SQRL099ZD). The China Fusion Engineering Test Reactor (CFETR) is a superconducting tokamak device, a next-generation engineering reactor between ITER and DEMO. It is currently being designed by the Chinese national integration design group. In the present design, its magnet system consists of 16 Toroidal Field (TF) coils, 6 Central Solenoid (CS) coils and 8 Poloidal Field (PF) coils. A helium refrigerator with an equivalent cooling capacity of 5 kW at 4.5 K is proposed for the CFETR TF coil test facility. It can provide 3.7 K and 4.5 K supercritical helium for the TF coil, 50 K cold helium at a 10 g/s flow rate for the high-temperature superconducting (HTS) current leads, and 50 K cold helium with a cooling capacity of 1.5 kW for the thermal shield. This paper presents the conceptual design of the cryogenic control system for the CFETR TF coil test, including its architecture, hardware design and software development.

Intelligent robotic systems are becoming essential for inspection and measurement in harsh environments, such as the accelerator complex of the European Organization for Nuclear Research (CERN). Aiming at increasing safety and machine availability, robots can perform repetitive or dangerous tasks, reducing personnel risks such as exposure to radiation. The Large Hadron Collider (LHC) tunnel at CERN has been equipped with fail-safe trains on a monorail able to perform different missions autonomously, such as radiation surveys, civil-infrastructure monitoring through photogrammetry, fire detection, and survey measurements of accelerator devices. In this paper, the entire control architecture and the design of the low-level control that fulfil the requirements and challenges of the LHC tunnel are described. The train's low-level control is based on a PLC that communicates with the surface via 4G through a VPN, where a user-friendly graphical user interface allows operation of the robot. The low-level controller includes a PLC fail-safe program to ensure the safety of the system. The results of the commissioning in the LHC are presented.

The accelerator facilities at CERN span large areas, and the personnel protection systems consist of hundreds of interlocked doors delimiting the accelerator zones. Entrance into the interlocked zones from the outside is allowed only via a small number of access points. These are no longer simple doors: doors gave way to turnstiles, and then to mantraps, or Personnel Access Devices (PADs). Originally meant for high-security zones, the commercially available PADs have a number of CERN-specific additions. This paper presents in detail the purpose and characteristics of each piece of equipment constituting the access devices and its integration within the personnel protection system. Key concepts related to personnel safety (e.g. interlocked safety tokens, patrols) and to access control (e.g. access authorisation, biometric identity verification, equipment checks) are introduced and solutions discussed. Three generations of access devices are presented, starting from the LHC model put in service in 2008, continuing with the PS devices operational since 2014, and finally introducing the latest model under development for the refurbishment of the SPS Personnel Protection System.

Access to the interlocked zones of the CERN accelerator complex is allowed only for personnel wearing standard personal protective equipment. This equipment is complemented by specialised personal protective devices in case of specific hazards related to remnant radiation or the presence of cryogenic fluids. These complex devices monitor the environment in the vicinity of the user and warn of the presence of hazards such as radiation or oxygen deficiency. The use of the devices is obligatory, but is currently enforced only by procedures. In order to improve personnel safety, it has been proposed to verify that users are carrying their devices, switched on, when entering. This paper describes the development of a specialised multi-protocol terminal, based on a Texas Instruments digital signal processor and integrated into the personnel protection system. The device performs local checks of the presence and status of the operational dosimeter prior to allowing access to the interlocked zones. The results of the first tests in the Proton Synchrotron accelerator complex will be presented.

Diamond Light Source is celebrating 10 years of "users" at its facility in Oxfordshire, England. Its safety systems have been designed to the EN 61508 standard, with the facility constructed in three phases, which are now concluding. The final "Phase 3" beamline Personnel Safety System has been signed off; hence it is timely to review our experience of the journey with these systems.

PIP-II is a high-intensity proton linac being designed to support a world-leading physics program at Fermilab. Initially it will provide high-intensity beams for Fermilab's neutrino program, with a future extension to other applications (e.g. muon experiments) requiring an upgrade to CW linac operation. The machine is conceived as a 2 mA CW, 800 MeV H− linac, initially capable of working in a pulsed (0.55 ms, 20 Hz) mode for injection into the existing Booster. The planned upgrade to CW operation implies that the total beam current and damage potential will be greater than in any present HEP hadron linac. To mitigate the primary technical risks and challenges associated with PIP-II, an integrated system test for the PIP-II front-end technology is being developed. As part of this R&D, a robust machine protection system (MPS) is being designed. This paper describes the progress and challenges associated with the MPS.

We describe the development of firmware to support longitudinal bunch-by-bunch feedback at Diamond Light Source. As well as feedback, the system supports complex experiments and the capture of detailed electron-beam diagnostics. In this paper we describe the firmware development and some details of the processing chain, focusing on some of the challenges of FPGA development from the perspective of a software engineer.

Funding: Work supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515. The hard X-ray split-and-delay (HXRSnD) system at the Linac Coherent Light Source (LCLS) was designed to allow experiments requiring two-pulse-based X-ray photon correlation spectroscopy. The system consists of eight silicon crystals split between two optical branches, with over 30 degrees of freedom. To maintain system stability and safety while easing operation, we extend the LCLS Skywalker software suite to provide a Python-based automation scheme that handles alignment, operations and engineer notification. Core safety systems such as collision avoidance are processed at the controller and Experimental Physics and Industrial Control System (EPICS) layers. Higher-level functionality is implemented using a stack of open-source Python packages (ophyd, bluesky, transitions) which provide a comprehensive and robust operational environment consisting of virtual motors, plans and finite state machines (FSMs).
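The finite-state-machine idea underlying such automation can be sketched in a few lines of pure Python. The state and transition names below are invented for illustration; the actual HXRSnD automation is built on the 'transitions' package together with ophyd and bluesky, not on this hand-rolled class.

```python
# Minimal FSM sketch: only transitions listed in the table are legal,
# so an automation sequence cannot jump into an unsafe state.
# State names ("idle", "aligning", ...) are illustrative only.

class AlignmentFSM:
    # allowed transitions: current state -> set of reachable states
    TRANSITIONS = {
        "idle": {"aligning"},
        "aligning": {"aligned", "fault"},
        "aligned": {"idle"},
        "fault": {"idle"},
    }

    def __init__(self):
        self.state = "idle"

    def trigger(self, new_state):
        """Move to new_state, refusing any transition not in the table."""
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

fsm = AlignmentFSM()
fsm.trigger("aligning")
fsm.trigger("aligned")
```

Encoding the legal transitions as data rather than scattered if-statements is what makes such a machine easy to audit for safety, which is the point of using an FSM library in the operational code.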

Today's front-end controllers, which are widely used in CERN's controls environment, feature CPUs with high clock frequencies and extensive memory. Their specifications are comparable to low-end servers, or even smartphones. The Java Virtual Machine (JVM) has been running on similar configurations for years now, and it seems natural to evaluate the behaviour of JVMs in this environment to determine whether firm or soft real-time constraints can be addressed efficiently. Using Java at this low level offers the opportunity to refactor CERN's current implementation of the device/property model and to move away from a monolithic architecture towards a promising and scalable separation of concerns, where the front end may publish raw data that other layers decode and re-publish. This paper first presents an evaluation of machine-protection control system requirements in terms of real-time constraints, and a comparison of the performance of different JVMs against these constraints. In a second part, it details the efforts towards a first prototype of a minimal real-time Java supervision layer providing access to the hardware layer.

The superconducting linear electron accelerator ELBE at Helmholtz-Zentrum Dresden-Rossendorf is a versatile light source. It operates in continuous-wave (CW) mode to provide a high average beam current. To fulfil the requirements of future high-resolution experiments, the analogue low-level radio frequency (LLRF) control is currently being replaced by a digital μTCA.4-based LLRF system developed at DESY, Hamburg. Operation and parametrization are realized by a server application implemented by DESY using the ChimeraTK software framework. To interface with the WinCC 7.3-based ELBE control system, an OPC UA adapter for ChimeraTK has been developed in cooperation with DESY and Technische Universität Dresden (TUD). The poster gives an overview of the collaborating parties, the variable-mapping scheme used to represent LLRF data in the OPC UA server address space, and integration experience with different industrial OPC UA clients such as WinCC 7.3 and LabVIEW.

Accelerator control software often has to handle multi-dimensional data of physical quantities when aggregating readings from multiple devices (e.g. the reading of an orbit in the LHC). When storing such data as nested hashtables or lists, the ability to perform structural operations or calculations along arbitrary dimensions is hampered. Tensorics is a Java library that provides a solution to these problems. A tensor is an n-dimensional data structure, and both structural (e.g. extraction) and mathematical operations are possible along any dimension. Any Java class or interface can serve as a dimension, with coordinates being instances of a dimension class. This contribution elaborates on the design and functionality of the Tensorics library and highlights existing use cases in operational LHC control software, e.g. the LHC luminosity server and the LHC chromaticity correction application.
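The core idea of a tensor keyed by typed coordinates, with operations along any chosen dimension, can be illustrated in Python (the actual library is Java, and its API differs from this sketch). The dimension names "beam" and "bpm" and the orbit values below are invented example data.

```python
from collections import defaultdict

# A "tensor" here is a dict mapping a tuple of (dimension, coordinate)
# pairs to a value. Reducing along one dimension keeps all the others,
# which is the operation that nested hashtables make awkward.

def mean_over(tensor, dim):
    """Average tensor values over one dimension, keeping the rest."""
    groups = defaultdict(list)
    for coords, value in tensor.items():
        reduced = tuple((d, c) for d, c in coords if d != dim)
        groups[reduced].append(value)
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Invented 2-D example: orbit readings per beam and per BPM.
orbit = {
    (("beam", "B1"), ("bpm", "BPM.1")): 0.2,
    (("beam", "B2"), ("bpm", "BPM.1")): 0.4,
    (("beam", "B1"), ("bpm", "BPM.2")): -0.1,
    (("beam", "B2"), ("bpm", "BPM.2")): 0.3,
}

# Average over the "beam" dimension: result is a 1-D tensor over BPMs.
per_bpm = mean_over(orbit, "beam")
```

Because the reduction works on dimension names rather than on a fixed nesting order, the same call averages over "bpm" instead of "beam" without restructuring the data, which is the flexibility the library provides in Java with compile-time typed dimensions.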

nTOF is a pulsed neutron facility at CERN which studies neutron interactions as a function of energy. Neutrons are produced by a pulsed proton beam from the PS directed onto a lead target. In a typical experiment, a sample is placed in the neutron beam and the reaction products are recorded. The typical output signal from the nTOF detectors is a train of pulses, each one corresponding to a different neutron energy interacting with the sample. The Data Acquisition System (DAQ) was upgraded in 2014 and is characterized by challenging requirements: more than a hundred 12- and 14-bit channels at sampling frequencies of 1 GS/s and 1.8 GS/s, acquired simultaneously every 1.2 s for up to 100 ms. The amount of data to be managed can reach peaks of several GB/s. This paper describes the hardware solutions as well as the software architecture developed to ensure proper synchronization between all the DAQ machines, along with data integrity, retrieval and analysis. The software modules and tools developed for monitoring and controlling the nTOF experimental areas and the DAQ operation are also detailed.
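The mapping from pulse arrival time to neutron energy that underlies such a DAQ can be sketched as follows. This is not the nTOF analysis code; the flight-path length, threshold and waveform below are invented example values, and the pulse finder is a deliberately naive threshold crossing. The non-relativistic time-of-flight relation E[eV] = (72.3 · L[m] / t[μs])² follows from E = ½ m_n (L/t)².

```python
# Illustrative sketch: find pulses in a digitized detector waveform by
# threshold crossing, then convert each pulse time to a neutron energy.
# All numeric parameters here are made-up example values.

SAMPLE_RATE_GS = 1.0   # 1 GS/s, i.e. one sample per nanosecond
FLIGHT_PATH_M = 185.0  # example flight-path length in metres
THRESHOLD = 50.0       # ADC counts above baseline

def find_pulses(waveform):
    """Return sample indices where the signal first crosses THRESHOLD."""
    pulses, above = [], False
    for i, v in enumerate(waveform):
        if v >= THRESHOLD and not above:
            pulses.append(i)
            above = True
        elif v < THRESHOLD:
            above = False
    return pulses

def neutron_energy_ev(sample_index):
    """Non-relativistic TOF-to-energy conversion, E in eV."""
    t_us = sample_index / (SAMPLE_RATE_GS * 1000.0)  # ns -> us
    return (72.3 * FLIGHT_PATH_M / t_us) ** 2

# Synthetic waveform: two single-sample pulses on a flat baseline.
waveform = [0] * 100000
waveform[20000] = 80    # earlier pulse: faster, higher-energy neutron
waveform[90000] = 120   # later pulse: slower, lower-energy neutron

pulses = find_pulses(waveform)
energies = [neutron_energy_ev(i) for i in pulses]
```

The earlier a pulse arrives in the train, the faster (and more energetic) the neutron, which is why a single beam pulse yields a full energy spectrum per acquisition window.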

Due to the massively parallel operation modes at the GSI accelerators, a lot of accelerator setup and re-adjustment has to be performed during a beam time. This is typically done manually and is very time-consuming. With the FAIR project the complexity of the facility increases further, and for efficiency reasons it is advisable to establish a high level of automation. Modern accelerator control systems allow fast access to both accelerator settings and beam diagnostics data. Together with the fast-switching magnets in GSI beamlines, this provides the opportunity to implement evolutionary algorithms for automated adjustment. A lightweight Python interface to the CERN Front-End Software Architecture (FESA) made it possible to try this novel idea quickly and easily at the CRYRING@ESR injector. Furthermore, the Python interface simplifies the workflow significantly, as the evolutionary-algorithms Python package DEAP could be used. DEAP has already been applied in external optimization studies with particle tracking codes*. The first results and the experience gained from automated optimization at the CRYRING@ESR injector are presented here. * S. Appel, O. Boine-Frankenheim, F. Petrov, Injection optimization in a heavy-ion synchrotron using genetic algorithms, Nucl. Instrum. Methods A 852 (2017), pp. 73-79.
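The evolutionary-tuning loop described above can be sketched with a minimal hand-rolled genetic algorithm. The real work uses the DEAP package and a fitness evaluated from live beam-diagnostics readings; here the objective is a made-up stand-in that is peaked at some "ideal" magnet settings, and all parameters (population size, mutation width, number of settings) are illustrative.

```python
import random

# Minimal evolutionary optimizer: keep the best half of the population,
# refill with Gaussian-mutated copies of the survivors, repeat.
# The objective below is a synthetic surrogate for a beam-intensity
# reading, not a real machine model.

random.seed(42)
IDEAL = [0.3, -1.2, 0.7]  # pretend optimum for three corrector settings

def intensity(settings):
    """Surrogate beam-intensity reading: 0 at the optimum, negative elsewhere."""
    return -sum((s - t) ** 2 for s, t in zip(settings, IDEAL))

def evolve(pop_size=30, generations=40, sigma=0.2):
    pop = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=intensity, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = [
            [g + random.gauss(0, sigma) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=intensity)

best = evolve()
```

In the real setup each fitness evaluation is a machine measurement, so the fast-switching magnets matter: they keep the per-evaluation cost low enough for population-based optimization to be practical.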

Magnet measurements at the Paul Scherrer Institute (PSI) are performed with the use of a process control tool (PCT), which is fully integrated into the PSI control system. The tool is implemented as a set of user-friendly graphical applications, each dealing with a particular magnet measurement technique supported at PSI: the Hall probe, vibrating wire, and moving wire methods. The core of each application is state-machine software developed by magnet measurement and control system experts. The applications act as efficient assistants to the magnet measurement personnel by monitoring the whole measurement process online and helping them react in a timely manner to any operational errors. The paper concentrates on the PCT structure and its performance.

The Square Kilometre Array (SKA) is a global project to build a multi-purpose radio telescope that will play a major role in answering key questions in modern astrophysics and cosmology. It will be one of a small number of cornerstone observatories around the world that will provide astrophysicists and cosmologists with a transformational view of the Universe. Two major goals of the SKA are to study the history and role of neutral hydrogen in the Universe from the dark ages to the present day, and to employ pulsars as probes of fundamental physics. Since 2008, the global radio astronomy community has been engaged in the development of the SKA and is now nearing the end of the 'Pre-Construction' phase. This talk will give an overview of the current status of the SKA and the plans for construction, focusing on the computing and software aspects of the project.

At the SPring-8 site, the X-ray free-electron laser facility SACLA and the third-generation light source SPring-8 storage ring are operated. SACLA generates brilliant coherent X-ray beams with wavelengths below 0.1 nm, and SPring-8 provides brilliant X-rays to a large number of experimental users. Within the SPring-8 upgrade project we plan to use the SACLA linac as a full-energy injector. For this purpose, the two accelerators must be controlled seamlessly, and SACLA has to generate the X-ray laser and serve as the injector for SPring-8 simultaneously. We have started the design of a control system to meet those requirements. We are redesigning the whole control framework, including the database, messaging system and equipment control, adopting a NoSQL database, MQTT and EtherCAT. In this paper, we report the design of the control system for SACLA/SPring-8 together with the status of the SPring-8 upgrade project.

The Laser MegaJoule (LMJ) is a 176-beam laser facility located at the CEA CESTA laboratory near Bordeaux, France. It is designed to deliver about 1.4 MJ of energy to targets for high-energy-density physics experiments, including fusion experiments. The first 8-beam bundle was operated in October 2014, and a new bundle was commissioned in October 2016. The next two bundles are on their way. There are three steps in the validation of a new bundle and its integration into the existing control system. The first step is to verify the ability of every command-control subsystem to drive the new bundle using a secondary, independent supervisory system; this is performed from a dedicated integration control room. The second is to switch the bundle to the supervisory system of the main operations control room. At this stage, we perform the global system tests to validate the commissioning of the new bundle. In this paper we focus on the switch of a new bundle from the integration control room to the main operations control room. We have to connect all equipment controllers of the bundle to the operations network and update the Facility Configuration Management.