The ambitious upgrade plan of the ALICE experiment calls for a complete redesign of its data flow after the LHC shutdown scheduled for 2019, for which new electronics modules are being developed in the collaborating institutes. Access to prototypes is at present very limited, and full-scale prototypes are expected only close to the installation date. To overcome the lack of realistic hardware, the ALICE DCS team built small-scale prototypes based on low-cost commercial components (Arduino, Raspberry Pi), equipped with environmental sensors, and installed them in the experiment areas around and inside the ALICE detector. Communication and control software was developed, based on the architecture proposed for the future detectors, including the CERN JCOP framework and ETM WinCC OA. Data provided by the prototypes have been recorded for several months, in the presence of beam and magnetic field. The challenge of the harsh environment revealed some insurmountable weaknesses, excluding this class of devices from use in a production setup. They did prove, however, to be robust enough for test purposes, and they remain a realistic test bed for developers while the production of the final electronics continues.

Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

The NIF Shot Data Systems (SDS) team developed the Target Request Tool (TRT) Web application to facilitate the management of target requests from creation to approval. TRT provides a simple and user-friendly interface that allows the user to create, edit, submit and withdraw requests. The underlying design uses current Web technologies such as Node.js, Express, jQuery and JavaScript. The overall software architecture and functionality will be presented in this paper. LLNL-ABS-728266

Scientific data management is a key aspect of the IT system of a user research facility like the MAX IV Laboratory. By definition, this system handles the data produced by the experimental users of the facility. It could be perceived as being as simple as copying the experimental data to an external hard drive to carry back to the home institute for analysis. But the "data" can be seen as more than just a file in a directory, and the "management" as more than a copy operation. Simplicity and a good user experience versus security/authentication and reliability are among the main challenges of this project, along with the required changes in mindset. This article explains the concepts and the initial roll-out of the system at the MAX IV Laboratory for the first users, as well as the features anticipated in the future.

We describe the data analysis structure that is integrated into the Karabo framework [1] to support scientific experiments and data analysis at European XFEL GmbH. The photon science experiments have a range of data analysis requirements, including online (i.e. near real-time during the actual measurement) and offline data analysis. The Karabo data analysis framework supports the execution of automatic data analysis for routine tasks, supports complex experiment protocols including the feedback of data analysis results into instrument control, and supports the integration of external applications. The online data analysis is carried out using distributed and accelerator hardware (such as GPUs) where required to balance load and achieve near real-time throughput. Analysis routines provided by Karabo are implemented in C++ and Python, and make use of established scientific libraries. The XFEL control and analysis software team collaborates with users to integrate experiment-specific analysis codes, protocols and requirements into this framework, and to make it available for the experiments and subsequent offline data analysis. [1] Heisen et al. (2013), "Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks", Proc. 14th ICALEPCS 2013, Melbourne, Australia (p. FRCOAAB02)

Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

For the last 10 years, the National Ignition Facility (NIF) has provided scientists with an application, the Campaign Management Tool (CMT), to define the parameters needed to achieve their experimental goals. Conceived to support the commissioning of the NIF, CMT allows users to define over 18,000 settings. As NIF has transitioned to an operational facility, the low-level focus of CMT is no longer required by most users and makes setting up experiments unnecessarily complicated. At the same time, requirements have evolved as operations has identified new functionality needed to achieve higher shot execution rates. Technology has also changed since CMT was developed, with the ubiquity of the internet and web-based tools being two of the biggest changes. To address these requirements while adding new laser and diagnostic capabilities, NIF has begun to replace CMT with the Shot Setup Tool (SST). This poses challenges in terms of software development and deployment, as the introduction of the new tool must be done with minimal interruption to ongoing operations. The development process, transition strategies and technologies chosen to migrate from CMT to SST will be presented. LLNL-ABS-728212

The BioMAX beamline at MAX IV is devoted to macromolecular crystallography and, thanks to its high-end instrumentation and comprehensive software environment, will achieve a high level of experimental automation when its full potential is reached. The control system is based on Tango and Sardana for managing the main elements of the beamline. Data acquisition and experiment control are done through MXCuBE v3, which interfaces with the control layer. Currently, the most critical elements, such as the detector and the diffractometer, are already integrated into the control system, and the integration of the sample changer has started. BioMAX has received its first users, who successfully collected diffraction data and provided feedback on the general performance of the control system and its usability. The present work describes the main features of the control system and its operation, as well as the upcoming instrument integration plans.

The ALICE Detector Control System (DCS) has provided its services to the experiment for 10 years. It ensures uninterrupted operation of the experiment and guarantees stable conditions for data taking. The decision to extend the lifetime of the experiment requires a redesign of the DCS data flow. The interaction rates of the LHC in ALICE during the Run 3 period will increase by a factor of 100. The detector readout will be upgraded and will provide 3.4 TB/s of data, carried by 10,000 optical links to a first-level processing farm consisting of 1,500 computer nodes and ~100,000 CPU cores. A compressed volume of 20 GB/s will be transferred to the computing GRID facilities. The detector conditions acquired by the DCS, consisting of about 100,000 parameters, need to be merged with the primary data stream and transmitted to the first-level farm every 50 ms. This requirement results in an increase of the DCS data publishing rate by a factor of 5,000. The new system does not allow for any DCS downtime during data taking, nor for data retrofitting. Redundancy, proactive monitoring, and improved quality checking must therefore complement the data flow redesign.

The J-PARC MR Machine Protection System (MR-MPS) has been in place since the start of beam operation in 2008. Since then, the MR-MPS has contributed to safety, including the stable operation of the accelerator and the experimental facilities. The present MR-MPS needs to be reviewed with respect to the growing number of connected devices, the addition of a power-supply building, more flexible beam-abort processing, module uniqueness, service life, and other aspects. In this paper, we present the performance of the MR-MPS and considerations for a future upgrade.

Despite the large number of feedback loops running simultaneously at the FERMI Free Electron Laser (FEL), they are not sufficient to keep the machine at its optimal working point in the long term, in particular when the machine is tuned in such a way as to be more sensitive to drifts of the critical parameters. In order to guarantee the best machine performance, a novel software application has been implemented that minimizes the shot-to-shot correlation between these critical parameters and the FEL radiation. This application, which keeps the seed laser and the electron beam spatially and temporally aligned, runs transparently during experiment beam time, contrary to many algorithms that inject noise into the system being optimized. In this paper we also present a newly developed method to calculate a beam 'quality factor' from the images provided by a photon spectrometer, which tries to mimic the evaluation of machine physicists, as well as the results obtained using two model-less algorithms to optimize the FEL performance through maximization of the quality factor.
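The model-less idea can be illustrated with a minimal sketch: accept only random perturbations of the machine parameters that improve the measured quality factor. The `quality` function, step size and iteration count below are invented for the example and do not correspond to the actual FERMI algorithms.

```python
import random

def optimize(quality, params, step=0.1, iters=300, seed=0):
    """Model-less hill climbing: propose a small random perturbation of
    the parameters and keep it only if the quality factor improves.
    (Illustrative sketch only -- not the actual FERMI optimizer.)"""
    rng = random.Random(seed)
    best = list(params)
    best_q = quality(best)
    for _ in range(iters):
        trial = [p + rng.uniform(-step, step) for p in best]
        q = quality(trial)
        if q > best_q:          # keep only improving moves
            best, best_q = trial, q
    return best, best_q

# Hypothetical quality factor, peaked at the (unknown) optimal working point
quality = lambda p: -((p[0] - 0.3) ** 2 + (p[1] + 0.2) ** 2)
best, q = optimize(quality, [0.0, 0.0])
```

Because only improving moves are accepted, the optimizer never injects net noise into the machine, which is why such schemes can run transparently during user beam time.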

As part of the UK's in-kind contribution to the European Spallation Source, ISIS is working alongside the ESS and other partners to develop a new data streaming system for managing and distributing neutron experiment data. The new data streaming system is based on the open-source distributed streaming platform Apache Kafka. A central requirement of the system is to be able to supply live experiment data for processing and visualisation in near real-time via the Mantid data analysis framework. There already exists a basic TCP socket-based data streaming system at ISIS, but it has limitations in terms of scalability, reliability and functionality. The intention is for the new Kafka-based system to replace the existing system at ISIS. This migration will not only provide enhanced functionality for ISIS but also an opportunity for developing and testing the system prior to use at the ESS.

Funding: Work supported by the German Bundesministerium für Bildung und Forschung, Land Berlin and grants of the Helmholtz Association.

The 1.7 GeV light source BESSY II features about 50 beamlines, overbooked by a factor of 2 on average. The availability of high-quality synchrotron radiation (SR) is thus a central asset. SR users at BESSY II can base their beam-time expectations on numbers generated according to the common operation metrics*. Major failures of the facility are analyzed according to * and displayed in real time; analyses of minor detriments are provided regularly by offline tools. Many operational constituents are required for extraordinary availability figures: meaningful alarming and dissemination of notifications; complete logging of program, device, system and operator activities; post-mortem analysis; and data-mining tools. Preventive and corrective actions are enabled by consistent root-cause analysis based on accurate eLog entries, trouble ticketing and consistent failure classifications. This paper describes the tool sets, the developments, their implementation status and some showcase results at BESSY II. * Common operation metrics for storage ring light sources, A. Luedeke, M. Bieler, R.H.A. Farias, S. Krecic, R. Mueller, M. Pont, and M. Takao, Phys. Rev. Accel. Beams 19, 082802

Funding: This work was supported by the Korean Ministry of Science, ICT & Future Planning under the KSTAR project.

In fusion experiments, a real-time network is essential for plasma control: it transfers diagnostic data from the diagnostic devices and command data from the PCS (Plasma Control System). Among these data, transmitting image data from a diagnostic system to other systems in real time is harder than for other data types, because images are much larger. Image transmission therefore requires high throughput and best-effort delivery, and real-time operation requires low latency. RTPS (Real Time Publish Subscribe) is reliable and offers Quality of Service settings that enable a best-effort protocol. In this paper, eProsima Fast RTPS was used to implement an RTPS-based real-time network. Fast RTPS provides low latency, high throughput, and both best-effort and reliable publish/subscribe communication for real-time applications over standard Ethernet. This paper evaluates the suitability of Fast RTPS for real-time image data transmission: to evaluate the performance of a Fast RTPS-based system, a publisher system publishes image data and multiple subscriber systems subscribe to it. * giilkwon@nfri.re.kr, Control team, National Fusion Research Institute, Daejeon, South Korea

One of the yet unanswered questions in physics today concerns the action of gravity upon antimatter. The GBAR experiment proposes to measure the free-fall acceleration of neutral antihydrogen atoms. Installation of the project at CERN (ELENA) began in late 2016. This research project faces new challenges and needs flexibility in both hardware and software. The modularity and distributed architecture of EPICS have been tested for the control system, providing flexibility for future improvements to the installation. This paper describes the development of the software and the set of software tools being used on the project.

The ATLAS experiment has recently commissioned a new hardware component of its first-level trigger: the topological processor (L1Topo). This innovative system, using state-of-the-art FPGA processors, selects events by applying kinematic and topological requirements on candidate objects (energy clusters, jets, and muons) measured by calorimeters and muon sub-detectors. Since the first-level trigger is a synchronous pipelined system, such requirements are applied within a latency of 200ns. We will present the first results from data recorded using the L1Topo trigger; these demonstrate a significantly improved background event rejection, thus allowing for a rate reduction without efficiency loss. This improvement has been shown for several physics processes leading to low-pT leptons, including H->tau tau and J/Psi->mu mu. In addition, we will discuss the use of an accurate L1Topo simulation as a powerful tool to validate and optimize the performance of this new trigger system. To reach the required accuracy, the simulation must take into account the limited precision that can be achieved with kinematic calculations implemented in firmware.
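The effect of limited firmware precision can be imitated in software by quantizing the kinematic inputs to coarse least-significant bits before computing an angular distance. The LSB values below are invented for illustration and do not correspond to the real L1Topo granularities; the sketch only shows why an accurate simulation must model the quantization.

```python
import math

PHI_LSB = 2 * math.pi / 128   # hypothetical 7-bit phi granularity
ETA_LSB = 0.1                 # hypothetical eta granularity

def quantize(x, lsb):
    return int(round(x / lsb))    # integer representation, as in firmware

def delta_r2_fw(eta1, phi1, eta2, phi2):
    """Delta-R^2 computed on quantized integers, mimicking firmware
    precision (illustrative sketch; actual L1Topo encodings differ)."""
    deta = quantize(eta1, ETA_LSB) - quantize(eta2, ETA_LSB)
    dphi = abs(quantize(phi1, PHI_LSB) - quantize(phi2, PHI_LSB))
    dphi = min(dphi, 128 - dphi)      # wrap-around in phi
    # return in physical units to compare with the float calculation
    return (deta * ETA_LSB) ** 2 + (dphi * PHI_LSB) ** 2

exact  = (0.05 - (-0.32)) ** 2 + (0.1 - 3.0) ** 2
approx = delta_r2_fw(0.05, 0.1, -0.32, 3.0)
```

The small but systematic difference between `approx` and `exact` is exactly the kind of discrepancy a bit-accurate trigger simulation has to reproduce.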

Synchrotron facilities provide short, regular and high-frequency flashes of light. These pulses are used by the scientific community for time-resolved experiments. To improve the time resolution, the demand for ever-shorter X-ray pulses is growing. To this end, Synchrotron SOLEIL and the MAX IV Laboratory have developed special operating modes such as low-alpha and femtoslicing, as well as a single-pass linear accelerator. For the most demanding experiments, the synchronization between short light pulses and pump-probe devices requires sub-picosecond delay adjustment. The TimIQ system has been developed for that purpose, as a joint development between Synchrotron SOLEIL and the MAX IV Laboratory; it is intended for use on three beamlines at SOLEIL and one at MAX IV. Based on IQ modulation techniques, it allows shifting a radio-frequency clock in steps of ~100 fs. This paper describes the system and its performance.
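The principle behind IQ-based delay adjustment can be sketched in a few lines: weighting the in-phase and quadrature components of a carrier by cos φ and sin φ yields the phase-shifted carrier cos(ωt + φ), and a phase step of ω·Δt is equivalent to a time delay Δt. The 500 MHz frequency below is an arbitrary choice for the example, not the actual SOLEIL or MAX IV RF frequency.

```python
import math

def iq_shift(t, f, phi):
    """Phase-shift an RF clock via IQ modulation: the in-phase and
    quadrature components are weighted by cos(phi) and sin(phi), so the
    sum equals cos(2*pi*f*t + phi) -- the carrier delayed by phi/(2*pi*f)."""
    i = math.cos(phi) * math.cos(2 * math.pi * f * t)
    q = math.sin(phi) * math.sin(2 * math.pi * f * t)
    return i - q    # = cos(2*pi*f*t + phi)

f = 500e6                      # hypothetical 500 MHz RF clock
dt = 100e-15                   # ~100 fs step, as in TimIQ
phi = 2 * math.pi * f * dt     # phase equivalent of a 100 fs delay
shifted = iq_shift(1e-9, f, phi)
delayed = math.cos(2 * math.pi * f * (1e-9 + dt))
```

Because the delay is set by an amplitude ratio rather than a physical delay line, the step resolution is limited only by the precision of the I and Q weights.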

At the Budker Institute of Nuclear Physics, a linear induction accelerator with a beam energy of 20 MeV (LIA-20) is being developed for radiography. A distinctive feature of this accelerator's protection scope is that, in addition to machine and personnel protection, it has an experiment protection system. The main goal of this additional system is to inhibit the experiment in a timely manner in the event of accelerator faults. The system is based on uniform protection controllers in a VME form factor, connected to each other by optical fibre. Through dedicated lines, a protection controller rapidly receives fault information from accelerator subsystems such as power supplies, magnets and vacuum pumps. Moreover, each pulsed power supply (modulator) rapidly reports its state through a dedicated 8-channel interlock processing board, which forms the base of the modulator controller. The system must process over 4,000 signals and decide within several microseconds whether to inhibit or permit the experiment.
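The per-channel decision of such an 8-channel interlock board reduces to simple bit logic, sketched below. The bit assignments are invented for the example, and the real controllers implement this in hardware to meet the microsecond latency requirement.

```python
def permit(status, mask):
    """Grant the experiment permit only if no enabled channel reports a
    fault: `status` carries one bit per channel (1 = fault), and `mask`
    selects which channels participate in the interlock decision."""
    return (status & mask) == 0

# Hypothetical example: a fault on channel 2 vetoes the shot unless that
# channel is masked out (e.g. the corresponding modulator is not in use).
FAULT_CH2 = 0b00000100
```

Evaluating thousands of such signals is then a handful of wide AND/compare operations, which is what makes a decision within several microseconds feasible.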

NSLS-II has standardized on the Delta Tau Geo Brick LV 5 A motor controller, suitable for driving the majority of stepper and servo motors. Standardization reduces spare inventory and the skill set required for maintenance. However, some applications, especially instruments in space-confined endstations, require small or even miniature motors. The questions we address are: what are the options for customizing the 5 A unit to drive low-current motors, and what are the limitations? In this paper, we present a quantitative comparison of drive currents and performance data, collected with the Delta Tau PeWin software and external test equipment, for a variety of low-current steppers and servo motors, with and without encoders, ranging from 45 mA to 250 mA. The Delta Tau Geo Brick LV comes in different amplifier configurations: combinations of 5 A, 1 A and 0.25 A amplifiers. While all configurations are tested, the research focuses on the performance and limitations of the 5 A driver, avoiding the step-and-direction option that requires extra hardware. The performance of the widely used Newport MFA-PP and MFA-CC stages will also be discussed.

I. Arredondo, J. Jugo
University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain

Nowadays, modern accelerators are starting to use virtualization to implement their control systems. Following this idea, one of the possibilities is to use containers. Containers are highly scalable, easy to produce and reproduce, easy to share, resilient, elastic and cheap in terms of computational resources; all of these characteristics fit the needs of a well-defined and versatile control system. In this paper, a control structure based on this paradigm is discussed. First, the technologies available for this task are briefly compared, from containerization tools to container orchestration technologies; as a result, Kubernetes and Docker are selected. Then, the basics of Kubernetes/Docker and how they fit the control of an accelerator are stated. Next, the control applications suitable for containerization are analysed, including electronic log systems, archiving engines, middleware servers, etc. Finally, a particular structure for an accelerator based on EPICS as middleware is sketched.

Funding:This project is partially funded by the European Union Framework Programme for Research and Innovation Horizon 2020, under grant agreement 676548.The European Spallation Source will produce more data than existing neutron facilities, due to higher accelerator power and to the fact that all data will be collected in event mode with no hardware veto. Detector data will be acquired and aggregated with metadata coming from sources such as sample environment, choppers and motion control. To aggregate data we will use Apache Kafka with FlatBuffers serialisation. A common schema repository defines the formats to be used by the data producers and consumers. The main consumers we are prototyping are a file writer for NeXus files and live reduction and visualisation via Mantid. A Jenkins-based setup using virtual machines is being used for integration tests, and physical servers are available in an integration laboratory alongside real hardware. We present the current status of the data acquisition pipeline and results from the testing and integration work going on at the ESS Data Management and Software Centre in collaboration with in-kind and BrightnESS partners.
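The aggregation step can be illustrated independently of Kafka and FlatBuffers: each source emits timestamped records, and the aggregator merges them into a single time-ordered stream. The record layout below is invented for the sketch; in the real pipeline the records are FlatBuffers-serialised messages on Kafka topics.

```python
import heapq

def aggregate(*streams):
    """Merge timestamped (t, source, value) records from several sorted
    streams (detector events, sample environment, choppers, ...) into one
    stream ordered by time -- a simplified, in-memory stand-in for the
    Kafka-based aggregation described above."""
    return list(heapq.merge(*streams, key=lambda rec: rec[0]))

# Hypothetical inputs: detector events plus sample-environment metadata
detector   = [(0.0, "det", 17), (2.0, "det", 21)]
sample_env = [(0.5, "temp", 295.0), (1.5, "temp", 296.1)]
merged = aggregate(detector, sample_env)
```

A consumer such as a NeXus file writer can then walk a single ordered stream instead of reconciling timestamps itself.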

At the ESRF, the activity of several beamlines is based upon tomographic X-ray imaging in various fields such as paleontology, medical imaging and materials science. The instrument control and data processing systems are cloned on all the relevant beamlines; however, the steps of the processing pipeline, from data acquisition to full exploitation in premier-quality publications, rely on a heterogeneous software scenario comprising e.g. SPEC, Python, Octave, PyHST2 and MATLAB modules. A clear need has thus emerged to logically sequence the operations performed by these different actors into user-friendly workflows. At the ESRF we selected a generic workflow tool, Orange, which was originally developed at the University of Ljubljana and designed for data mining in collaboration with the open-source community. The graphical interface enables the easy inclusion or exclusion of functionalities represented by individual boxes. Each box can be managed by simple pieces of Python code generating graphical interfaces via the PyQt5 library, and is defined by a set of inputs and outputs which can be linked together to produce consistent data processing workflows.

The acquisition of X-ray diffraction data from macromolecular crystals is a major activity at many synchrotrons and requires user interfaces that provide robust and easy-to-use control of the experimental setup. Building on the modular design of the MxCuBE beamline user interface, we have implemented a finite state machine model that makes it possible to describe and monitor the interaction of the user with the beamline in a typical experiment. Using a finite state machine, the path of user interaction can be rationalized, and error conditions and recovery procedures can be dealt with systematically. Gabadinho, J. et al. (2010). MxCuBE: a synchrotron beamline control environment customized for macromolecular crystallography experiments. J. Synchrotron Rad. 17, 700-707
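A minimal sketch of such a finite state machine is shown below. The states, events and recovery transitions are hypothetical stand-ins, not the actual MxCuBE model, but they illustrate how every error condition gets an explicit, systematic recovery path.

```python
class BeamlineFSM:
    """Toy finite state machine for an MX experiment (hypothetical
    states and events, not the actual MxCuBE transition table)."""
    TRANSITIONS = {
        ("idle", "mount_sample"):     "sample_mounted",
        ("sample_mounted", "centre"): "centred",
        ("centred", "collect"):       "collecting",
        ("collecting", "done"):       "idle",
        # every non-idle state has an explicit recovery path
        ("sample_mounted", "error"):  "idle",
        ("centred", "error"):         "idle",
        ("collecting", "error"):      "idle",
    }

    def __init__(self):
        self.state = "idle"

    def trigger(self, event):
        """Apply an event; reject anything not allowed in the current state."""
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

Because illegal transitions raise immediately, the interface can only ever offer the user the actions that are valid at the current point of the experiment.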

The Polish national synchrotron radiation centre SOLARIS UJ is being prepared for its first users. In order to facilitate user management, proposal submission, review and beam-time allocation, the SOLARIS Digital User Office (DUO) project has been started. DUO is developed in collaboration with the Academic Computer Centre CYFRONET AGH and consists of several main components. The user management component allows user registration and user affiliation management. The proposal submission component facilitates filling in the proposal form and indicating co-proposers and experimentalists. The review component supports the decision-making process, including the review meeting and proposal grading. Apart from managing the main processes, the application provides additional functionality (e.g. experimental reports, trainings, feedback). DUO was designed as an open platform to face the challenges of the continually evolving SOLARIS facility; the business logic is therefore described as an easily maintainable rule-based specification. To achieve a good user experience, modern web technologies were used, including Angular for the front end and Java Spring for the server.

Funding: China Spallation Neutron Source and the science and technology project of Guangdong province under grant Nos. 2016B090918131 and 2017B090901007.

In this paper we introduce the design and implementation of the neutron instrument experiment control system at CSNS. The task of the control system is to carry out the spectrometer experiments while providing experimental data for physics analysis. The instrument control system at CSNS coordinates device control, data acquisition and analysis software, electronics, detectors, the sample environment and many other subsystems. This paper describes the system architecture, the timing system, device control and the instrument control software at CSNS. Corresponding author: Jian ZHUANG, e-mail: zhuangj@ihep.ac.cn

The ISOLDE facility at CERN requires a wide variety of software applications to ensure maximum productivity. This will be further reinforced by two new and innovative applications: Automatic Save After set uP (ASAP) and Fast Beam Investigation (FBI). ASAP saves crucial time for the engineers in charge (EIC) during the physics campaign by automating and standardizing a repetitive process: for each new set-up, the EIC is required to document the settings of all important elements before delivering beam to the users. FBI will serve two different needs. First, it will be used as a beam traceability tool: the settings of every element of ISOLDE that could obstruct, stop or affect the beam will be tracked by the application, which will allow a better understanding of the presence of radioactive contaminants after each experiment at every possible point in the facility. The second functionality will allow real-time monitoring of the machine status during a physics run. FBI will be the most efficient way to visualize the status of the machine and find the reason that prevents the beam from arriving at the experimental station.

Funding: Brazilian Synchrotron Light Laboratory (LNLS), Brazilian Center for Research in Energy and Materials (CNPEM), Zip Code 13083-970, Campinas, Sao Paulo, Brazil.

Brazil is building Sirius, the new Brazilian synchrotron light source, which will be the largest scientific infrastructure ever built in Brazil and one of the world's first fourth-generation light sources. Mogno, the future X-ray nano- and microtomography beamline, is being designed to execute and process experiments in only a few seconds. For this reason, prototypes and automated systems have been tested and implemented at the current Brazilian Synchrotron Light Laboratory (LNLS) imaging beamline (IMX). An industrial robot was installed to allow fast sample exchange through an easy-to-use graphical user interface. In addition, scripts using Python and the Experimental Physics and Industrial Control System (EPICS) were implemented for automatic sample alignment, measurement and reconstruction. Finally, a flow cell for studying the dynamics and behaviour of fluids at the rock-pore scale in time-resolved experiments (4D tomography) is being designed.
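The core of such an automatic alignment script can be sketched with NumPy: take an absorption profile from a single projection, compute its intensity-weighted centroid, and derive the motor correction needed to centre the sample. The profile, pixel size and contrast model below are invented for the example; the production scripts additionally drive EPICS motors.

```python
import numpy as np

def centroid_offset(projection, pixel_size_um):
    """Estimate how far the sample sits from the image centre via the
    intensity-weighted centroid of an absorption profile; the result is
    the motor correction to apply (illustrative sketch, not the
    production IMX/Mogno alignment code)."""
    profile = 1.0 - projection / projection.max()   # absorption contrast
    x = np.arange(profile.size)
    centre = (x * profile).sum() / profile.sum()
    return (centre - (profile.size - 1) / 2) * pixel_size_um

# Synthetic flat-field profile with an absorbing sample off-centre:
# the sample occupies pixels 60..70 of a 101-pixel line.
proj = np.ones(101)
proj[60:71] = 0.2
offset_um = centroid_offset(proj, 1.0)
```

The returned offset (here 15 pixels, i.e. 15 µm at a 1 µm pixel size) would then be written to the sample-stage motor before the scan starts.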

Developing and deploying software systems for data acquisition and experiment control in a beamline laboratory can be a very challenging task. In certain cases there is a need to replace and modernize an existing system in order to accommodate substantial beamline upgrades. DonkiOrchestra is a TANGO-based framework for data acquisition and experiment control developed at Elettra Sincrotrone Trieste. The framework is based on an advanced software trigger-driven paradigm developed in-house. DonkiOrchestra is meant to be general and flexible enough to be adapted to the development needs of different laboratories and their data acquisition requirements. This presentation outlines the upgrade of the LabVIEW-based control system of the TwinMic beamline, which hosts a unique soft X-ray transmission and emission microscope. Besides the technically demanding tasks of interfacing and controlling old and new instrumentation with DonkiOrchestra, this presentation discusses the various challenges of upgrading the software of a working synchrotron beamline.

Asynchronous data acquisition at the Inner-Shell Spectroscopy beamline at NSLS-II is performed using custom FPGA-based I/O devices ("pizza-boxes"), which store and timestamp data using a GPS-based clock*. During motor scans, incremental encoder signals corresponding to the motion, as well as analog detector signals, are stored using EPICS IOCs. As each input creates a file with different timestamps, the data are first interpolated onto a common time grid. The energy scans are performed by a direct-drive monochromator, controlled with a Power PMAC controller. The motion is programmed to follow the trajectory with speed profiles corresponding to the desired data density. The "pizza-boxes" that read analog signals are typically set to oversample the data stream, digitally improving the ADC resolution. The data are then binned onto an energy grid with the desired point spacing. To organize everything in an easy-to-use platform, we developed XLive, a Python-based GUI application. It can be used from pre-experiment preparation to data visualization and export, including beamline tuning and data acquisition. * R. Kadyrov et al., "Encoder Interface For NSLS-II Beam Line Motion Scanning Applications", ICALEPCS'15, Melbourne, Australia, October 2015, http://icalepcs.synchrotron.org.au/papers/wepgf080.pdf
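The two processing steps described above, interpolation onto a common time grid followed by binning onto an energy grid, can be sketched with NumPy. These are simplified stand-ins for the XLive internals; the stream names and grids are invented for the example.

```python
import numpy as np

def align_streams(t_common, streams):
    """Interpolate each independently timestamped input stream
    ({name: (timestamps, values)}) onto a common time grid."""
    return {name: np.interp(t_common, t, v) for name, (t, v) in streams.items()}

def bin_to_grid(energy, signal, edges):
    """Average oversampled points into energy bins defined by `edges`,
    yielding the desired point spacing."""
    idx = np.digitize(energy, edges)
    return np.array([signal[idx == i].mean() for i in range(1, len(edges))])

# Hypothetical inputs: encoder and detector sampled at different times
t_common = np.array([0.0, 1.0, 2.0])
streams = {"encoder": (np.array([0.0, 2.0]), np.array([0.0, 2.0])),
           "i0":      (np.array([0.0, 1.0, 2.0]), np.array([1.0, 1.5, 2.0]))}
aligned = align_streams(t_common, streams)
```

After alignment, each row of the common grid carries simultaneous encoder and detector values, so the energy binning can be done per row.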

This contribution reviews the novel LHC luminosity control software stack. All luminosity-related manipulations and scans in the LHC interaction points are managed by the LHC luminosity server, which enforces concurrency correctness and transactionality. Operational features include luminosity optimization scans to find the head-on position, luminosity levelling, and the execution of arbitrary scan patterns defined by the LHC experiments in a domain specific language. The LHC luminosity server also provides full built-in simulation capabilities for testing and development without affecting the real hardware. The performance of the software in 2016 and 2017 LHC operation is discussed and plans for further upgrades are presented.
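A toy version of an optimization scan: step the transverse beam separation through a pattern, record the luminosity at each point, and estimate the head-on position from the luminosity-weighted centroid. The Gaussian overlap model and all numbers are invented for illustration; the operational server uses proper fits and drives the real (or simulated) hardware.

```python
import math

def optimize_scan(measure, points):
    """Luminosity optimization scan sketch: sample the luminosity at each
    separation and estimate the head-on position as the luminosity-
    weighted centroid (a simplified stand-in for the operational fits)."""
    lumi = [measure(p) for p in points]
    return sum(p * l for p, l in zip(points, lumi)) / sum(lumi)

# Toy luminosity model: Gaussian beam overlap, head-on at +0.12 (arb. units)
measure = lambda sep: math.exp(-((sep - 0.12) ** 2) / (2 * 0.2 ** 2))
points = [i * 0.05 - 0.5 for i in range(21)]   # scan -0.5 ... +0.5
head_on = optimize_scan(measure, points)
```

Swapping `measure` for a simulated luminosity signal is also how a server like this can be exercised end-to-end without touching real hardware.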

The Laser MegaJoule (LMJ) is a 176-beam laser facility, located at the CEA CESTA Laboratory near Bordeaux (France). It is designed to deliver about 1.4 MJ of energy to targets, for high energy density physics experiments, including fusion experiments. The first 8-beams bundle was operated in October 2014 and a new bundle was commissioned in October 2016. The next two bundles are on the way. The presentation gives an overview of the Personnel Safety System architecture, focusing on the wired safety subsystem named BT2. We describe the specific software tool used to develop wired safety functions. This tool simulates hardware and bus interfaces, helps writing technical specifications, conducts functional analysis, performs functional tests and generates documentation. All generated documentation and results from the tool are marked with a unique digital signature. We explain how the tool demonstrates SIL3 compliance of safety functions by integrating into a standard V-shaped development cycle.

Versatile Macromolecular in-situ (VMXi) is the first beamline at Diamond Light Source (DLS) to be entirely automated, with no direct user interaction to set up and control experiments. This marks a radical departure from other beamlines at the facility, and it has presented a significant design challenge for General Data Acquisition (GDA), the in-house software that manages beamline data collection. GDA has become a reactive controller for the continual, uninterrupted processing of all user experiments. A major achievement has been to demonstrate that a suitable architectural implementation for automation can be delivered within a standard integrated development environment (IDE): there is no need for specialised software or a domain-specific language for automation. The objectives are to review the VMXi project with emphasis on hardware configuration and experiment processing; to describe the software and control architecture for automation; and to provide a general set of guidelines for developing automation software at a scientific facility.

A significant part of the experiments run at the Alba Synchrotron* involve scans. The continuous scans were first developed ad hoc; later, the controls group dedicated significant effort to standardizing them across the Alba instruments, enhancing the overall performance and allowing the users to better exploit their beam time**. Sardana***, the experiment control software used at Alba, aims, among other features, to provide a generic way of programming and executing continuous scans. This development has just achieved a major milestone: an official version with a stable API. Recently the Alba instruments were successfully upgraded to profit from this release. In this paper we describe the evolution of these setups as well as the new continuous-scan applications run at Alba. On the one hand, the most relevant hardware solutions are presented and assessed. On the other hand, the Sardana software is evaluated in terms of its utility for building continuous-scan setups. Finally, we discuss the planned improvements designed to satisfy the ever-increasing requirements of the scientists. * http://www.albasynchrotron.es ** Z. Reszela et al., 'Implementation of Continuous Scans Used in Beamline Experiments at Alba Synchrotron', ICALEPCS2013 *** http://www.sardana-controls.org

Sirius is the new Brazilian synchrotron and will be finished in 2018. Based on experience at the LNLS UVX light source, along with research and implementations, we present our new approach to developing user interfaces for beamline control. The main tools explored in this process are Python, Qt and several Python libraries: PyQt, PyDM and Py4syn. The powerful resources of these modules and Python's straightforward coding guarantee flexible user interfaces: it is possible to combine graphical applications with intelligent control procedures. At UVX, EPICS and Python are already used for the distributed control system and control routines, respectively. These routines often use Py4Syn, a library which provides high-level abstractions for device manipulation. All these features will continue at Sirius. More recently, PyQt has turned out to be a compatible and intuitive tool for building GUI applications, binding Qt to Python, and PyDM offers a practical framework for exposing EPICS variables to PyQt. The result is a set of graphical and control libraries to support new user interfaces for the Sirius beamlines.

The life cycle of an ILL instrument has two main stages. During the design of the instrument, a precise but static 3D model of its components is developed. Then comes the exploitation of the instrument, during which the Nomad control software allows scientific experiments to be performed. Almost all instruments at the ILL have movable parts, often hidden behind radiological protection elements such as heavy concrete walls or casemates. Massive elements of the sample environment, like magnets and cryostats, must be aligned in the beam. All of these devices can collide with their surrounding environment. To avoid such accidents, instrument movements must be checked by a pre-experiment simulation that reveals possible interferences. Nomad 3D is the application that links the design and experiment aspects, providing an animated 3D physical representation of the instrument while it moves. Collision detection algorithms will protect the movable parts from crashes. During an experiment, it will augment reality by making it possible to "see" behind the walls. It will also provide a precise virtual representation of the instrument during simulations.

Funding:Centro Científico Tecnológico de Valparaíso (CONICYT FB-0821)The ALMA Common Software (ACS) framework provides Bulk Data Transfer (BDT) service implementations that need to be updated for new projects that will use ACS, such as the Cherenkov Telescope Array (CTA), most of which have quite different requirements than ALMA. We propose a new open-source BDT service for ACS based on ZeroMQ that meets the CTA data transfer specifications while maintaining backward compatibility with the closed-source solution used in ALMA. The service uses the push-pull pattern for data transfer, the publish-subscribe pattern for data control, and Protocol Buffers for data serialization, with the option to easily integrate other serialization schemes. Besides complying with the ACS interface definition so it can be used by ACS components and clients, the service provides an independent API for use outside the ACS framework. Our experiments show a good compromise between throughput and computational effort, suggesting that the service could scale up in terms of the number of producers, number of consumers and network bandwidth.
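The push-pull data path described above can be sketched with pyzmq; the endpoint name and message layout below are illustrative assumptions, not the actual ACS/BDT API, and raw bytes stand in for Protocol Buffers messages.

```python
# Minimal sketch of a ZeroMQ push-pull data stream, using the in-process
# transport so sender and receiver share one context. Endpoint and frame
# layout are invented for illustration.
import zmq

ctx = zmq.Context.instance()

# Sender side: a PUSH socket distributes data frames to connected receivers.
sender = ctx.socket(zmq.PUSH)
sender.bind("inproc://bdt-data")

# Receiver side: a PULL socket consumes frames; several pullers would
# share the stream in a load-balanced fashion.
receiver = ctx.socket(zmq.PULL)
receiver.connect("inproc://bdt-data")

# In the real service the payload would be a serialized protobuf message.
for seq in range(3):
    sender.send_multipart([b"stream-1", seq.to_bytes(4, "big"), b"payload"])

frames = [receiver.recv_multipart() for _ in range(3)]
print(len(frames))   # 3
print(frames[0][0])  # b'stream-1'
```

A separate PUB/SUB socket pair would carry the control messages (start/stop of streams), mirroring the data-control split described in the abstract.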

A high intensity neutrino beam produced at J-PARC is used by the T2K long-baseline neutrino oscillation experiment. To generate this beam, a high intensity proton beam is extracted from the 30 GeV Main Ring synchrotron into the neutrino primary beamline. In the beamline, one mistaken shot can do serious damage to beamline equipment. To avoid such a consequence, many equipment interlocks that automatically stop beam operation are implemented. When an interlock is activated, the beam operator consults the operation manual, confirms the safety of the beamline equipment and resumes beam operation. To improve the present system, we are developing an expert system for prompt and efficient understanding of the beamline status, so that beam operation can be resumed quickly. When an interlock is activated, the expert system consults previous interlock patterns and infers what happened in the beamline; it then suggests to the beam operator how to resume beam operation. We have developed and evaluated this expert system. In this talk, we report the development status and initial results.

LHCb has introduced a novel online detector alignment and calibration for LHC Run II. This strategy allows for better trigger efficiency, better data quality and direct physics analysis at the trigger output. It implies running a first High Level Trigger (HLT) pass synchronously with data taking and buffering its output locally; using the data collected at the beginning of the fill, or on a run-by-run basis, to determine the new alignment and calibration constants; and running a second HLT pass on the buffered data using the new constants. Operationally, this represented a challenge: it required running different activities concurrently in the farm, starting at different times and load-balanced depending on the LHC state. However, these activities are now an integral part of LHCb's dataflow, seamlessly integrated in the Experiment Control System and completely automated under the supervision of LHCb's 'Big Brother'. In total, around 60000 tasks run on the ~1600 nodes of the farm. Load balancing of tasks between activities takes less than 1 second. The mechanisms for configuring, scheduling and synchronizing the different activities on the farm, and in the experiment in general, will be discussed.

Funding:U.S. Department of Energy's National Nuclear Security Administration, DE-NA0003525The Z Machine is the world's largest pulsed power machine, routinely delivering over 20 MA of electrical current to targets in support of US nuclear stockpile stewardship and in pursuit of inertial confinement fusion. The large-scale, multi-disciplinary nature of experiments ('shots') on the Z Machine requires resources and expertise from disparate organizations with independent functions and management, forming a Collaborative System-of-Systems. This structure, combined with the Emergent Knowledge Processes central to preparation and execution, creates significant challenges in planning and coordinating the activities leading up to a given experiment. The present work demonstrates an approach to scheduling planned shot-day activities to aid in coordinating workers among these different groups, using minimal information about the activities' temporal relationships to form a Simple Temporal Network (STN). Historical data is mined, allowing a standard STN to be created for common activities, with the lower bounds between those activities defined. Activities are then scheduled at their earliest possible times, giving interested participants a time at which to check in.maschaf@sandia.gov
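Computing each activity's earliest possible start from the lower bounds of an STN amounts to a longest-path pass over the network in topological order. A minimal sketch, with invented activity names and bounds (not the actual Z Machine checklist):

```python
# Each edge (a, b, lb) encodes the STN lower bound "b starts at least
# lb minutes after a". Earliest starts are the longest paths from the root.
from collections import defaultdict, deque

edges = [
    ("shot_start", "charge_marx", 30),
    ("charge_marx", "final_checks", 45),
    ("shot_start", "align_target", 20),
    ("align_target", "final_checks", 60),
]

succ = defaultdict(list)
indeg = defaultdict(int)
nodes = set()
for a, b, lb in edges:
    succ[a].append((b, lb))
    indeg[b] += 1
    nodes.update((a, b))

# Longest-path relaxation in topological (Kahn) order.
earliest = {n: 0 for n in nodes}
queue = deque(n for n in nodes if indeg[n] == 0)
while queue:
    n = queue.popleft()
    for m, lb in succ[n]:
        earliest[m] = max(earliest[m], earliest[n] + lb)
        indeg[m] -= 1
        if indeg[m] == 0:
            queue.append(m)

print(earliest["final_checks"])  # 80 (via the align_target branch)
```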

The KEK injector linac has delivered electrons and positrons for particle physics and photon science experiments for more than 30 years. It is being upgraded for the SuperKEKB project, which aims at a 40-fold increase in luminosity over the previous KEKB project in order to deepen our understanding of flavour physics. This project requires a ten-times smaller emittance and a five-times larger current in the injection beam from the injector, and many hardware components are being tested and installed. Even during the 6-year upgrade, the linac was requested to inject beams into the PF and PF-AR light source storage rings. Furthermore, the beam requirements of these storage rings differ. SuperKEKB demands the highest performance, and unscheduled interruptions may be acceptable if performance is improved; the light sources, however, expect stable operation without any unscheduled breaks, mainly because most users run experiments for a short period. In order to meet both requirements, several measures are taken in the operation, construction and maintenance strategy, including simultaneous top-up injections.

In the X-ray experimental stations at SPring-8, beamline staff and experimental users sometimes need to reconfigure the measurement system for new experiments. Quick reconfiguration is required, and this has previously resulted in elaborate work. The aim of DARUMA is to provide a standardized procedure for constructing a flexible data collection and control system for experimental stations. It utilizes the MADOCA II control framework* developed for the distributed control of accelerators and beamlines at SPring-8. A unified control procedure with abstracted text-based messaging helps significantly reduce the time and cost of preparing the measurement system. DARUMA provides applications for 2D detectors such as PILATUS, as well as for the pulse motors and trigger systems used in the stations. Image data are collected with metadata into a NoSQL database, Elasticsearch. Analysis tools for images, such as online monitoring and offline analysis, are also provided. User applications can be easily developed with Python and LabVIEW. DARUMA can be flexibly applied to experimental stations and is being deployed at BL03XU at SPring-8. We also plan to introduce it into other experimental stations.* T. Matsumoto et al., Proceedings of ICALEPCS 2013, p. 944

European Spallation Source (ESS), the next-generation neutron source facility, is expected to produce an immense amount of data. Various working groups, mostly associated with the EU project BrightnESS, aim at developing solutions for its data-intensive challenges. Real-time data management and aggregation are among the top priorities, and the Apache Kafka framework will be the base for ESS real-time distributed data streaming. One of the major challenges is the simulation of data streams, from experimental data generation to data analysis and storage. This presentation outlines a simulation approach based on the DonkiOrchestra data acquisition and experiment control framework, re-purposed as a data streaming simulation system compatible with the ESS-Kafka infrastructure.
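The stream-simulation idea can be illustrated with a toy producer/consumer pair in which an in-memory queue stands in for a Kafka topic; all names and event fields are invented for illustration and are not the DonkiOrchestra or ESS-Kafka APIs.

```python
# A producer emits timestamped "detector" events as serialized bytes into
# a queue standing in for a Kafka topic; a consumer aggregates them.
import json
import queue

topic = queue.Queue()  # stand-in for a Kafka topic/partition

def produce(n_events):
    for i in range(n_events):
        event = {"detector": "monitor_1", "pulse": i, "counts": 100 + i}
        topic.put(json.dumps(event).encode())  # Kafka payloads are bytes

def consume():
    total = 0
    while not topic.empty():
        event = json.loads(topic.get())
        total += event["counts"]
    return total

produce(5)
print(consume())  # 100 + 101 + 102 + 103 + 104 = 510
```

Swapping the queue for a real broker would mean replacing `topic.put` and `topic.get` with a Kafka producer's send and a consumer's poll, while the event-generation logic stays unchanged.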

Recent neutron scattering experiments generate large quantities and varied kinds of experimental data. At J-PARC MLF, it is possible to conduct many experiments under various conditions in a short time, thanks to the high-intensity neutron beam and high-performance neutron instruments with a wealth of sample environment equipment. Efficient and effective data analysis is therefore required. Additionally, since almost nine years have passed since MLF began operation, much of the equipment and many systems are due for renewal, and failures occur due to aging degradation. Because such failures can lose precious beam time, a failure or its early signs should be detected promptly. The MLF status analysis system, based on the Elasticsearch, Logstash and Kibana (ELK) Stack, a rapidly growing web-based framework for big data analysis, ingests various data from the neutron instruments in real time. Through flexible user-driven analysis and visualization, it provides insight for decision-making in data analysis and experiments as well as in instrument maintenance. In this paper, we report the overview and development status of our status analysis system.
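Ingesting status records into Elasticsearch is typically done through its bulk API, whose body is newline-delimited JSON of alternating action and document lines. A small sketch of shaping such a payload; the index and field names are invented for illustration:

```python
# Build the action/source line pairs expected by Elasticsearch's
# POST /_bulk endpoint from a list of instrument status records.
import json

records = [
    {"instrument": "BL01", "beam_power_kw": 150.2, "chopper_ok": True},
    {"instrument": "BL02", "beam_power_kw": 149.8, "chopper_ok": False},
]

def to_bulk_ndjson(index, docs):
    """Return a newline-delimited bulk body: one action line per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the bulk body must end with a newline

payload = to_bulk_ndjson("mlf-status", records)
print(payload.count("\n"))  # 4 lines: 2 action lines + 2 documents
```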

Today, vast amounts of data are generated by sources across economics, engineering and science. Accelerators, for instance, can generate 3 PB of data in a single experiment. The synchrotron community illustrates the volume and velocity of data that is too big to be analyzed at once: while some light sources handle as much as 11 PB, they are still confronted with data problems. This explosion of data has become a serious issue in today's synchrotron world, posing problems in storage, analytics, visualisation, monitoring and control. To address these problems, facilities adopt HDF5, grid computing, cloud computing, Hadoop/HBase and NoSQL technologies. Big data has recently attracted considerable attention from both academia and industry. We are looking for an appropriate and feasible solution to the data issues at ILSF; Hadoop and other up-to-date tools and components are being considered as a stable basis. In this paper, we evaluate big data tools and techniques tested at various light sources around the world for beamline data, focusing on the storage and analytics aspects.

The main goal of this paper is the presentation of the Dcs ARchive MAnager for ALICE Experiment detector conditions data (DARMA), the updated version of the AMANDA 3 software currently used within the ALICE experiment at CERN. The typical user of this system is either a physicist who performs further analysis on data acquired during the operation of the ALICE detector, or an engineer who analyzes the detector status between iterations of experiments. Based on experience with the current system, the updated version aims to reduce the overall complexity of its predecessor, leading to simpler implementation, administration and portability without sacrificing functionality. DARMA is realized as an ASP.NET web application based on the Model-View-Controller architecture, and this paper provides a closer look at the design phase of the new backend structure in comparison to the previous solution, as well as a description of the individual modules of the system.

The Australian Synchrotron, located in Clayton, Melbourne, is one of Australia's most important pieces of research infrastructure. After more than 10 years of operation, the beamlines at the Australian Synchrotron are well established and the demand for automation of research tasks is growing. Such tasks routinely involve the reduction of TB-scale data, online (real-time) analysis of the recorded data to guide experiments, and fully automated data management workflows. In order to meet these demands, a generic, distributed workflow system, Lightflow, was developed. It is based on well-established Python libraries and tools. The individual tasks of a workflow are arranged in directed acyclic graphs, and one or more directed acyclic graphs form a workflow. Workers consume the tasks, allowing the processing of a workflow to scale horizontally. Data can flow between tasks, and a variety of specialised tasks is available. Lightflow has been released as open source on the Australian Synchrotron GitHub page.
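The core execution model, tasks arranged in a directed acyclic graph with data flowing between them, can be sketched in a few lines of standard-library Python; this is an illustration of the concept, not the Lightflow API, and the task names are invented.

```python
# Execute a DAG of tasks in dependency order; each task receives a dict
# of its predecessors' results, so data flows along the graph edges.
from graphlib import TopologicalSorter  # Python 3.9+

def load(deps):    return {"frames": list(range(4))}
def reduce_(deps): return {"sum": sum(deps["load"]["frames"])}
def store(deps):   return {"stored": deps["reduce"]["sum"]}

tasks = {"load": load, "reduce": reduce_, "store": store}
dag = {"load": set(), "reduce": {"load"}, "store": {"reduce"}}  # node -> predecessors

results = {}
for name in TopologicalSorter(dag).static_order():
    deps = {d: results[d] for d in dag[name]}
    results[name] = tasks[name](deps)

print(results["store"])  # {'stored': 6}
```

In a distributed system the loop body would be handed to workers via a message queue, which is what allows the horizontal scaling described above.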

The timing and synchronization system at the ALBA synchrotron facility is based on the well-established event-based model broadly used in the particle accelerator facilities built in the last decade. In previous systems, based on a signal-model architecture, the master frequency was distributed as a direct analog signal and delayed at each target where triggers were required. Such a strategy, however, has proven to be extremely expensive and non-scalable. In the event-based model, the data stream is generated at a continuous rate, synchronously with the master clock oscillator of the accelerator. This strategy improves the flexibility of tuning trigger parameters remotely and reduces maintenance costs. On the other hand, the absence of a pure RF signal distributed to the experimental stations makes time-resolved experiments considerably more complex. This paper explains how these difficulties have been overcome in the ALBA timing system in order to allow reconstruction of the RF master frequency signal at the CIRCE beamline.

Sub-ns precision time synchronization is required for data-acquisition components distributed over up to tens of km² in modern astroparticle experiments, like the upcoming gamma-ray and cosmic-ray detector arrays, to ensure optimal triggering, pattern recognition and background rejection. The White Rabbit (WR) standard for precision time and frequency transfer is well suited for this purpose. We present two multi-channel general-purpose TDC units, firmware-implemented on two widely used WR nodes: the SPEC (Spartan 6) and ZEN (Zynq) boards. Their main features are TDCs with 1 ns resolution (default), running dead-time-free and capable of supporting local buffering and centralized level-2 trigger architectures. The TDCs timestamp pulses in absolute TAI. With off-the-shelf mezzanine boards (5ChDIO-FMC boards), up to 5 TDC channels are available per WR node. Higher-density, customized simple I/O boards allow turning these into 8- to 32-channel units with an excellent price-to-performance ratio. The TDC units have shown excellent long-term performance in a harsh-environment application at TAIGA-HiSCORE/Siberia, for the front-end DAQ and the central GPSDO clock facility.

Funding:Research supported by the Polish Ministry of Science and Higher Education, funds for international co-financed projects for the year 2017.The Master Oscillator system of the European XFEL was built using the frequency synthesis techniques found to have the best phase noise performance. This includes low-noise frequency multipliers and non-multiplying phase-locked loops, incorporated in the system to shape its output phase noise spectrum. The jitter of the output signal strongly depends on the phase noise transmittance of the PLL, and a suboptimal design can worsen it by orders of magnitude. Considering that the PLL open-loop transmittance can usually be shaped in multiple ways, and that accurate phase noise measurements can easily take more than 30 minutes, an automated tool becomes a necessity. For this purpose, a tuning-system approach was chosen in order to simplify the phase noise optimisation process. This paper describes the optimisation of the PLL synthesizer phase noise, done to improve the performance of the European XFEL MO. We present the phase noise optimisation process and the achieved results.

Funding:This project has received funding from the European Research Council (ERC) under the European Union's Advanced Grant (AdG), 2014, ERC-2014-ADGAt the Fritz Haber Institute of the Max Planck Society, a new very-high-speed scanning tunneling microscope (VHS-STM) is being set up to resolve glass dynamics (Cryvisil). We have been successfully using EPICS (v3) for many of our most important and larger experiments. For the new project, however, the data throughput achievable with EPICS (v3) is not sufficient. For this reason, we have completely aligned the experiment control for the STM with the new EPICS7, using the new pvAccess protocol. The development versions of EPICS 3.16 and bundleCPP of the EPICSv4 suite are in use; both will be base components of the new EPICS7 framework. The expected data rate is 300 MByte/s for up to 5 hrs, to address the transition from a vitreous state to a crystalline one in real space over a wide range of temperatures, from cryogenic temperatures to 1500 K (*). In the poster we show the control system setup (VMEbus, RTEMS-SMP, MVME6100, MVME2500, V375, SIS3316) and the environment used, such as the ArchiverAppliance and the pva2pva gateway.* http://cordis.europa.eu/project/rcn/198020en.html

A total of 33 gas control applications are currently in production in the LHC experiments and the CERN accelerator complex. Each application contains around fifty synoptic views and hundreds of plots. In this paper, the entirely model-driven approach followed to generate all these HMIs is presented. The procedure implemented simplifies the creation of these graphical interfaces, allowing changes to be propagated to all visualizations at once in a coherent manner and thus reducing the long-term maintenance effort. The generation tool enables the creation of files of similar content based on templates, specific logic (rules) and variables written in simple user-defined XML files. This paper also presents the software design and the major evolution challenges currently faced: how the functions performed by the tool, as well as the technologies used in its implementation, have evolved while ensuring compatibility with the existing models.
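The template-plus-XML-variables generation step can be illustrated with a toy sketch using only the standard library; the element names, template text and PV name below are invented for illustration and do not reflect the actual tool.

```python
# Substitute variables defined in a simple XML model into a panel
# template, the core of a model-driven HMI generation step.
import string
import xml.etree.ElementTree as ET

model_xml = """
<application name="gas_ar_co2">
  <var name="TITLE">Argon/CO2 Mixer</var>
  <var name="PV">GCS:AR_CO2:PRESSURE</var>
</application>
"""

template = string.Template("panel '$TITLE' displays channel $PV")

root = ET.fromstring(model_xml)
variables = {v.get("name"): v.text for v in root.findall("var")}
panel = template.substitute(variables)
print(panel)  # panel 'Argon/CO2 Mixer' displays channel GCS:AR_CO2:PRESSURE
```

Regenerating every application from its model file is what makes a change to one template propagate coherently to all 33 applications.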

Sardana and Taurus form a Python software suite for Supervision, Control and Data Acquisition (SCADA) optimized for scientific installations. Sardana and Taurus are open source and deliver a substantial reduction in both the time and cost associated with the design, development and support of control and data acquisition systems. The project was initially developed at ALBA and later evolved into an international collaboration driven by a community of users and developers from ALBA, DESY, MAXIV and Solaris, as well as other institutes and private companies. The advantages of Sardana for adoption by other institutes are: free and open source code, a comprehensive workflow for enhancement proposals, a powerful environment for building and executing macros, optimized access to the hardware, and a generic Graphical User Interface (Taurus) that can be customized for every application. Sardana and Taurus are currently based on the Tango Control System framework but are also capable of interoperating to some extent with other control systems like EPICS. The software suite scales from small laboratories to large scientific institutions, allowing users to adopt only some parts or to employ it as a whole.

Funding:China Spallation Neutron Source and the science and technology project of Guangdong province under grant Nos. 2016B090918131 and 2017B090901007.This paper directs attention to the state machine design of the neutron scattering experiment control system at CSNS. The task of the software system is to carry out experiments on the spectrometers, and the purpose of the state machine design is to coordinate the subsystems. A spectrometer experiment at CSNS combines internal control, data acquisition and analysis software, electronics, detectors, the sample environment and many other subsystems. This paper focuses on the design details of the state machine.Corresponding author: Jian ZHUANG, e-mail: zhuangj@ihep.ac.cn

nTOF is a pulsed neutron facility at CERN which studies neutron interactions as a function of energy. Neutrons are produced by a pulsed proton beam from the PS directed onto a lead target. In a typical experiment, a sample is placed in the neutron beam and the reaction products are recorded. The typical output signals from the nTOF detectors are characterized by a train of pulses, each corresponding to a different neutron energy interacting with the sample. The Data Acquisition System (DAQ) was upgraded in 2014 and is characterized by challenging requirements: several hundred 12- or 14-bit channels at sampling frequencies of 1 GS/s and 1.8 GS/s, acquired simultaneously every 1.2 s for up to 100 ms. The amount of data to be managed can peak at several GB/s. This paper describes the hardware solutions as well as the software architecture developed to ensure proper synchronization between all the DAQ machines, along with data integrity, retrieval and analysis. The software modules and tools developed for monitoring and controlling the nTOF experimental areas and the DAQ operation are also detailed.

Due to the massively parallel operation modes at the GSI accelerators, a lot of accelerator setup and re-adjustment has to be done during a beam time. This is typically done manually and is very time-consuming. With the FAIR project the complexity of the facility increases further, and for efficiency reasons it is advisable to establish a high level of automation. Modern accelerator control systems allow fast access to both accelerator settings and beam diagnostics data. Together with the fast-switching magnets in the GSI beamlines, this provides the opportunity to implement evolutionary algorithms for automated adjustment. A lightweight Python interface to the CERN Front-End Software Architecture (FESA) made it possible to try this novel idea quickly and easily at the CRYRING@ESR injector. Furthermore, the Python interface simplifies the workflow significantly, as the evolutionary algorithms Python package DEAP could be used. DEAP has already been applied in external optimization studies with particle tracking codes*. The first results and the experience gained from automated optimization at the CRYRING@ESR injector are presented here.* S. Appel, O. Boine-Frankenheim, F. Petrov, Injection optimization in a heavy-ion synchrotron using genetic algorithms, Nucl. Instrum. Methods A, 852 (2017) pp. 73-79.
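DEAP packages the selection, crossover and mutation building blocks used in such studies. As a dependency-free illustration of the evolutionary loop (not the DEAP API, and with an invented fitness function standing in for a measured beam diagnostics reading):

```python
# Toy evolutionary tuning loop: candidate magnet settings are mutated
# and selected on a fitness function. A known optimum stands in for a
# real beam-transmission measurement; all numbers are illustrative.
import random

random.seed(42)
TARGET = [0.3, -1.2, 0.7]  # pretend optimal settings of three magnets

def fitness(settings):
    # Higher is better; stands in for, e.g., injected beam current.
    return -sum((s - t) ** 2 for s, t in zip(settings, TARGET))

def mutate(settings, sigma=0.1):
    return [s + random.gauss(0, sigma) for s in settings]

population = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]  # elitist truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print([round(s, 2) for s in best])  # settings near TARGET
```

In the real setup the fitness evaluation is the expensive step, as each candidate requires setting the magnets via FESA and reading back the diagnostics.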

Funding:Brazilian Synchrotron Light Laboratory (LNLS); Brazilian Center for Research in Energy and Materials (CNPEM)Three-dimensional image reconstruction in X-ray computed tomography (XRCT) is a mathematical process that depends entirely on the alignment of the object of study. Small variations in pitch and roll angles, and translational shifts between the center of rotation and the center of the detector, can cause large deviations in the captured sinogram, resulting in a degraded 3D image. Most popular reconstruction algorithms rely on prior adjustment of the sinogram ray offset before the reconstruction process. This work presents an automatic method for adjusting the shift and angle of the center of rotation (COR) before the experiment begins, removing the need to set geometrical parameters to achieve a reliable reconstruction. The method correlates different projections using the Scale Invariant Feature Transform (SIFT) algorithm to align the experimental setup with sub-pixel precision and fast convergence.
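The paper's method uses SIFT feature matching; the underlying goal, recovering the translational offset between two projections, can be illustrated with a simpler FFT phase-correlation sketch (integer-pixel only, and not the algorithm used in the work above):

```python
# Estimate the (row, col) shift between two images by phase correlation:
# the normalized cross-power spectrum of a shifted pair is a pure phase
# ramp, whose inverse FFT peaks at the shift.
import numpy as np

def shift_estimate(a, b):
    """Estimate the circular (row, col) shift mapping image b onto a."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12  # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets larger than half the image size into negative shifts.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(shift_estimate(shifted, img))  # (3, -5)
```

Sub-pixel precision, as achieved by the SIFT-based method, requires interpolating around the correlation peak or matching localized features rather than whole frames.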

The integration of the Data Acquisition, Offline Processing and Hardware Controls using MQTT has been proposed for the STAR Experiment at Brookhaven National Laboratory. Since the majority of the control system for the STAR Experiment uses EPICS, this created the need for a way to bridge MQTT and Channel Access bidirectionally. Using the CAFE C++ Channel Access library from PSI/SLS, we were able to develop such an MQTT-Channel Access bridge fairly easily. The prototype development of the MQTT-Channel Access bridge is discussed here.