Validation of Computational Fluid Dynamics (CFD) solutions against experimental data is critical, as CFD simulations are regularly used for site characterization and design analysis of Extremely Large Telescopes (ELTs). Site testing data for wind, temperature, and optical turbulence are used to validate the GMT CFD model configuration for the construction site at Las Campanas Peak in northern Chile. CFD simulations, both steady-state and unsteady, combined with the corresponding seeing models are performed, and estimates of the Ground Layer (GL) optical turbulence are calculated. Comparisons of wind, temperature, and optical turbulence profiles show a good match between simulated and observed data.

High-spatial-resolution unsteady thermal CFD simulations of LSST are performed and processed to estimate the image degradation due to dome seeing, expressed as FWHM. An analysis of the sensitivity of the image quality to certain important geometric features and aerothermal properties is presented. More specifically, the influence of the LSST vent light baffles and windscreen, the wind speed, and the surface temperature of components such as the primary and secondary mirrors, the camera, the telescope structure, and the dome exterior is assessed, and conclusions are drawn. The secondary mirror and camera surface temperatures are found to be among the most critical factors in minimizing LSST dome seeing.
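As a rough illustration of how a simulated Cn² profile along the optical path translates into a dome-seeing FWHM, the standard Kolmogorov relations (Fried parameter r0, FWHM ≈ 0.98 λ/r0) can be sketched as follows. The function name, profile values, and path length below are illustrative assumptions, not the LSST processing chain itself:

```python
import numpy as np

def seeing_fwhm_arcsec(cn2, dz, wavelength=500e-9):
    """Convert a discretized Cn^2 profile (m^-2/3) sampled every dz meters
    along the optical path into a seeing FWHM in arcseconds, via the
    Fried parameter r0 (standard Kolmogorov relations)."""
    k = 2.0 * np.pi / wavelength
    integral = np.sum(cn2) * dz                     # approximates the integral of Cn^2 dz
    r0 = (0.423 * k**2 * integral) ** (-3.0 / 5.0)  # Fried parameter [m]
    fwhm_rad = 0.98 * wavelength / r0               # seeing-limited FWHM [rad]
    return np.degrees(fwhm_rad) * 3600.0

# Illustrative profile: uniform Cn^2 of 1e-15 m^-2/3 over a 30 m dome path.
fwhm = seeing_fwhm_arcsec(np.full(30, 1e-15), dz=1.0)
```

Doubling the Cn² integral shrinks r0 and increases the FWHM, which is why small reductions in surface temperature differences can yield measurable seeing improvements.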

Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. The current study constitutes an update of the ongoing effort to minimize simulation time and to make the computation tractable with available computational resources, to understand the subsequent physical and numerical limitations, and finally to develop the approach to mitigate the issues experienced. In particular, the paper describes a mesh and time-step independence study as well as the parameters that influence the slope of the Optical Path Difference (OPD) structure function and the TMT Normalized Point Source Sensitivity Image Quality metric in the context of thermal seeing.
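The slope of the OPD structure function discussed above, D(r) = ⟨[OPD(x+r) − OPD(x)]²⟩, can be estimated from sampled OPD data along one dimension. A minimal sketch follows; the function name and the toy random-walk input are assumptions for illustration, not TMT simulation output:

```python
import numpy as np

def opd_structure_function(opd, dx, max_sep=200):
    """Estimate the 1-D structure function D(r) = <[OPD(x+r) - OPD(x)]^2>
    from a uniformly sampled OPD trace with sample spacing dx."""
    seps = np.arange(1, max_sep)
    d = np.array([np.mean((opd[s:] - opd[:-s]) ** 2) for s in seps])
    return seps * dx, d

# Toy input: a random walk, whose structure function grows linearly with
# separation (log-log slope ~1); Kolmogorov turbulence would instead give
# a slope of 5/3, so the fitted slope is a convergence diagnostic.
rng = np.random.default_rng(0)
opd = np.cumsum(rng.normal(size=4096))
r, d = opd_structure_function(opd, dx=0.01)
slope = np.polyfit(np.log(r), np.log(d), 1)[0]
```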

The Giant Magellan Telescope (GMT) is currently planned for construction at Las Campanas Peak in northern Chile. Part of the next generation of extremely large telescopes, GMT will be one of the most powerful ground-based telescopes in operation in the world. Due to the large aperture envisioned for GMT, characterization and control of the air flow entering and circulating within the enclosure will be required to maintain the highest possible image quality. Aero-thermal interactions between the site topography, enclosure, internal systems, and optics are complex. A key parameter for image quality is the thermal gradient between the terrain and the air mass entering the enclosure, and how quickly that gradient can be dissipated to equilibrium. Because the thermal gradients are highest near the ground, an important function of the GMT enclosure is to minimize the flow of ground-layer air entering the enclosure. By doing so, a more uniform air density is maintained above the telescope, enabling higher image quality.

The design of the GMT lower enclosure is driven by equipment storage and access requirements but also directly impacts the origin and quality of the air entering the enclosure aperture. To ensure the highest quality GMT optical performance, Computational Fluid Dynamics (CFD) models and specialized analyses are utilized to evaluate several lower enclosure designs for their ability to limit the amount of ground-layer air entering the enclosure aperture. Lower enclosure designs with traditional solid outer walls promote the formation of “necklace” vortices, which tend to direct near-surface air, containing steep thermal gradients, into the enclosure aperture, potentially reducing image quality. Modifications to the lower enclosure, such as perforating the outer walls, are shown to suppress these necklace vortices at the expense of added structural complexity and/or reduced internal storage space. Initial isothermal CFD simulations defined the minimum height above terrain reached by the flow path upwind of the observatory as a proxy for the quality of air entering the enclosure, with lower heights associated with steeper thermal gradients. Based on these results, the most promising designs are further refined and subjected to additional higher-fidelity CFD analyses, which include a terrestrial thermal boundary layer. These simulations are also surveyed to quantify the aero-thermal environment along telescope optical paths, permitting evaluation and comparison of the predicted optical performance of the final candidate enclosure designs. Results from preliminary water tunnel testing of select lower-enclosure designs have increased our confidence in the CFD simulations.

Since 2015, W. M. Keck Observatory has been considering the possibility of conducting nighttime operations without any staff on the summit of Maunakea. A combination of methods has been used to assess the risk of this change in operations from different perspectives. System experts were surveyed to determine potential gaps in functionality that could create risk when operating or troubleshooting systems remotely. A hazard and risk analysis of use cases describing nightly operations was conducted to identify risks to people, observatory equipment, and science quality and quantity arising from the absence of people on the summit during the night. Risks were also identified by mining the nighttime fault reporting data from 2010 to 2016 to determine instances where hands-on presence on the summit has been required to address issues. In the current state, these known issues would result in lost time and potential risk to equipment. The risk responses developed to address these risks have identified requirements on existing systems and for new capabilities to support unattended nighttime operations at WMKO.

In this paper we will describe how the development (design, build, integration, verification, and installation) of a technically compliant Integral Field Spectrograph (IFS) can be planned and executed. First, we will show how one would develop the product breakdown structure (PBS), making use of a structured, function-based systems engineering methodology grounded in systems thinking. The product breakdown structure is one of the primary outputs (deliverables) of the systems architecture design process and is a hierarchy of products implementing the physical architecture of the system. The system physical architecture is developed by implementing, in hardware and software, all the functions required over the lifetime of the system. To finalise the system architecture, the control and data flow needed to perform the required functions in the correct sequence must also be considered and implemented.

Once the system architecture has been developed, it can be partitioned into a hierarchical product breakdown structure consisting of sub-systems, modules, assemblies, sub-assemblies, and components. Thereafter, the product breakdown structure can be partitioned into a logical work breakdown structure. By using knowledge and understanding of the development workflows for each of the engineering disciplines required, a single product and work breakdown structure can be used to develop a robust project schedule. In addition, we will show how the processes of configuration management (CMII) are used to integrate the work elements of the various engineering disciplines into a coherent project plan to finalise the designs of parts, modules, assemblies, sub-systems, or systems to a level where these parts can either be made or procured for further assembly and integration. Using project planning software such as Microsoft Project, the general shape and critical path of the project can be determined.

Typically, the development of ground-based and space astronomical facilities is stretched over many years, even decades. Therefore, it is easy to waste a lot of time during the early development phases of the project on nugatory and non-essential tasks. We have adopted the Agile software development methodology to prepare, execute, and monitor short-term plans (sprints) to ensure progress is being made and that all work elements contribute to the end goal of the project.

We illustrate how these novel techniques have been, and still are being, used in the development of the HARMONI Integral Field Spectrograph. HARMONI was selected as one of the Extremely Large Telescope (ELT) first light instruments. The ELT will be the European Southern Observatory’s (ESO) next generation telescope and observatory and will be built in Chile on Cerro Armazones. The instrument has completed its preliminary design phase, and the team is now detailing the designs as part of the detailed design phase of the project.

A major objective of this paper is also to show that one single structure, namely the product breakdown structure, is all that is required to plan the development, construction, verification and validation, installation, and commissioning of any scientific product. By associating the engineering artefacts required to either procure or build each of the components, a robust project timeline can be developed by creating integrated workflows covering all the tasks required to progress the system from conception to a working instrument on sky.

One of the challenges in implementing an effective enterprise risk management system is establishing a common set of processes, tools, and standard terminology. The risk management system recently implemented at Gemini Observatory operates at four connected levels: project, program, division, and observatory, using a common set of processes, tools, templates, and standard terminology. The risk management process flow begins with the leader of the corresponding level writing a risk management plan outlining the process for the size and type of the project, roles and responsibilities, budgeting, timing, risk tolerances, and reporting for that specific level. In managing risks, the leader of each level performs risk identification, risk assessment, internal risk controls evaluation, mitigation planning, contingency planning, risk monitoring (including mitigation and contingency plan execution as needed), and risk closure in collaboration with appropriate staff. Risk categories used for project, program, and division risks are subcategories of the observatory-level risk categories of strategic, operational, financial, and compliance. The same risk register template is used for all levels, modified only for the specific categories required for each level. The risk register displays a risk score based on impact and likelihood ratings on a five-by-five matrix. All risk registers require that internal controls be evaluated for reducing risk, that mitigation and contingency plans be created, and that costs be estimated. Finally, staff are trained in risk management through a set of training modules designed for the system. This paper describes in detail the risk management system implemented at Gemini Observatory and discusses the first four months of use.
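The impact-times-likelihood scoring on a five-by-five matrix described above can be sketched in a few lines. The category names and band thresholds below are illustrative assumptions, not Gemini's actual register values:

```python
# Illustrative 5x5 risk-scoring scheme; rating labels and band thresholds
# are assumptions for demonstration, not the Gemini Observatory register.
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3,
              "likely": 4, "almost certain": 5}

def risk_score(impact, likelihood):
    """Risk score = impact rating x likelihood rating on a 5x5 matrix (1-25)."""
    return IMPACT[impact] * LIKELIHOOD[likelihood]

def risk_band(score):
    """Map a numeric score to a qualitative band (thresholds illustrative)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"
```

A register template then only needs the two ratings per risk; the score and band follow mechanically, which keeps scoring consistent across the four organizational levels.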

Proc. SPIE 10705, How to talk so your engineer will listen, how to listen so your scientist will talk: the human side of astronomical instrument development, 1070509 (10 July 2018); doi: 10.1117/12.2314087

Astronomical instrumentation development used to be much simpler than it is today. The quest for new discoveries and more light has driven the design and construction of new generations of ever-larger telescopes, which in turn created the need for correspondingly larger and more complex instruments. Large instrument teams composed of scientists and engineers from many technical disciplines have been brought together to design and build these instruments. Engineers trained in these disciplines have become key members of instrument development teams, taking responsibility for these areas of instrument design. With the engineers came an engineering culture and way of thinking that is often at odds with the scientific culture of astronomers. Project management techniques can help organize such an effort, but they have important limitations in a research environment and cannot ensure success. Training in the so-called “soft skills” can improve how a diverse team functions, but this, too, is not the complete answer. Only by immersing the engineer in an observing environment can one hope to overcome the cultural differences and inherent conflicts between scientists and engineers that can cause instrument projects to fail.

MEGARA is an IFU & MOS medium-resolution spectrograph that finished its commissioning at the GTC 10 m telescope in August 2017. MEGARA is a fiber-fed spectrograph with two major units, the Fiber-MOS and the Spectrograph, now located at the Folded-Cass F and Nasmyth-A foci of GTC, respectively. These are linked by more than 1200 fibers, 44.5 m in length, split between two observing modes: the LCB (Integral Field Unit, IFU) and a Multi-Object (MOS) capability with 92 robotic positioners, each carrying a mini-bundle of 7 fibers. The spectrograph can accommodate 18 VPHs (11 of which can be simultaneously mounted) covering the visible wavelength range at resolving powers between R=6000 and 20000. This paper presents the sequence of tasks carried out after Laboratory Acceptance at the Universidad Complutense de Madrid to move the whole instrument to the GTC. A detailed day-to-day plan was followed to disassemble, pack, transport, and reintegrate the full instrument at the GTC and to verify performance to ensure the instrument was ready for commissioning. The lessons learnt are relevant to other double-focus instruments being developed, such as WEAVE@WHT or PFS@Subaru.

The Commissioning Phase of the LSST Project is the final stage in the combined NSF- and DOE-funded LSST construction project. The LSST commissioning phase is planned to start early in 2020 and be completed near the end of 2022, ending with the LSST Observatory system ready to start survey operations. Commissioning includes the assembly of the three principal subsystems (Telescope, Camera, and Data Management) into the LSST Observatory System, the assembly, integration, and test (AI&T) efforts, and the science verification activities. The LSST System AI&T and Commissioning Plan is driven by a combination of engineering and scientifically oriented activities to show compliance with technical requirements and readiness to conduct science operations (acquiring data, processing data, and serving data and derived data products to users). LSST System AI&T and Commissioning will be carried out over four phases of activity: Phase-0) pre-commissioning preparations (planning and work breakdown structure); Phase-1) early System AI&T with a commissioning camera (ComCam); Phase-2) full System AI&T, when the LSST Science Camera is shipped to Chile and integrated on the telescope and the data management system (DMS) is exercised with full-scale data; and Phase-3) Science Validation, where a series of mini-surveys are used to characterize the system with respect to the survey performance specifications in the SRD/LSR and the functionality of the system, leading to operations readiness. The Science Validation Phase concludes with an Operations Readiness Review (ORR).
The LSST System Assembly, Integration and Test and Commissioning effort has been planned out over several phases. The first phase of commissioning, under Early AI&T, is designed to test and verify the system-level interfaces using ComCam -- a 144-megapixel imager utilizing the same control components as the full science camera. During this period, the telescope active optics system will be brought into compliance with system requirements; the scheduler will be exercised and all safety checks verified for autonomous operation; and early DM algorithm testing will be performed with on-sky data from ComCam using a commissioning computing cluster at the Base Facility.
The second phase of activities, under Full System AI&T, is designed to complete the technical integration of the three principal subsystems and EPO, show full compliance with system-level requirements as detailed in the Observatory System Specifications and system-level interface control documents, and provide full-scale data for further DM/EPO software and algorithmic testing and development. System-level requirements that flow directly to subsystems without any further derivation will be tested for compliance, at the subsystem level and below, under the supervision of Project Systems Engineering. This document includes the general approach and goals for these tests. It is expected that roughly four (4) months into the Full System AI&T phase the telescope and camera will be fully integrated and routinely producing science-grade images over the full field of view (FOV), at which point “System First Light” will be declared. Following System First Light will be an intensive data acquisition period designed to test the image processing pipelines and validate the derived science products that are to be delivered by the LSST survey.
The third and final phase of activities under Science Validation is designed to fully characterize the system performance specifications detailed in LSST System Requirements Document and the range of demonstrated performance per the LSST Science Requirements. These activities are based on the measured “On-Sky” performance and informed simulations of the LSST system.
In this paper we describe the inputs and assumptions to the commissioning plan, a summary of the activities in each phase, management strategies and expected outcomes.

This paper presents the plan for the system-level requirements verification of the ESO ELT. It describes the process to undertake this already ongoing activity and the tools supporting such process.

Verification methods (design, analysis, inspection and/or test), verification level (whether the concerned requirement is verified at system or subsystem level), milestones (at which stage in the programme the requirement is verified) and constraints, when applicable, are discussed. Particular emphasis is put on addressing how the key system requirements, i.e., the ones with a larger impact on the science return, are planned to be verified. Also, special attention is given to describe the model approach in place to help in the system-level verification activity.

Finally, some conclusions and lessons learned extracted so far from the system requirements verification activity are summarized.

The Integration and Verification Testing of the Large Synoptic Survey Telescope (LSST) Camera is described. The LSST Camera will be the largest astronomical camera ever constructed, featuring a 3.2 giga-pixel focal plane mosaic of 189 CCDs with in-vacuum controllers and readout, dedicated guider and wavefront CCDs, a three element corrector with a 1.6-meter diameter initial optic, six optical filters covering wavelengths from 320 to 1000 nm with a novel filter exchange mechanism, and camera-control and data acquisition capable of digitizing each image in two seconds. In this paper, we describe the integration processes under way to assemble the Camera and the associated verification testing program. The Camera assembly proceeds along two parallel paths: one for the focal plane and cryostat and the other for the Camera structure itself. A range of verification tests will be performed interspersed with assembly to verify design requirements with a test-as-you-build methodology. Ultimately, the cryostat will be installed into the Camera structure as the two assembly paths merge, and a suite of final Camera system tests performed. The LSST Camera is scheduled for completion and delivery to the LSST observatory in 2020.

The Large Synoptic Survey Telescope, under construction in Chile, is an 8.4 m optical survey telescope with a dedicated 3.2-gigapixel camera. The design and construction of the camera are spearheaded at SLAC National Accelerator Laboratory, and here we present a general overview of the camera integration and test activities. An overview of the methodologies used for the planning and management of this subsystem will be given, along with a high-level summary of the status of the major pieces of I&T hardware. Finally, a brief update will be given on the current state of the LSST Camera integration and testing program.

MAORY (Multi-conjugate Adaptive Optics RelaY) and MICADO (MCAO Imaging CamerA for Deep Observations) will perform the science in the Multi-conjugate Adaptive Optics mode of the ELT (Extremely Large Telescope). One of their goals is multi-object differential astrometry, which requires low optical distortion and diffraction-limited aberrations. To align MAORY, an automated method will be used during the integration of the instrument, and it could be part of the calibration strategy at the ELT site. This paper describes the method and the ray-tracing simulations carried out to validate the algorithm. Even in the presence of different error sources, the method works over a large range of misalignments, bringing the system close to the nominal performance.

HARMONI is the first-light integral field spectrograph for the ELT. It includes a core 'science instrument' -- the IFS -- supported by a range of other systems, in particular adaptive optics sensors for SCAO and LTAO. The latter was, for many years, treated as an entirely separate instrument within the ELT observatory architecture. A better understanding of the technical challenges, together with a changing political and funding environment, led to the merger of the two projects in 2014. The project now amounts to over 400 FTE, with a commensurately large hardware budget.
The IFS part of the instrument, at least in function, remains largely unchanged since 2009, when the consortium completed a Phase A study as part of the (then 42 m) E-ELT instrument studies. The structure of the consortium was essentially fixed then, and many firm (and soft) contractual agreements and understandings limit the flexibility to match work to product. Over the years, however, as the ELT project has evolved, the design and scope of HARMONI have changed and expanded. This has brought new partners into the consortium, changed the design concept of the instrument, introduced new interfaces, and updated requirements. To further complicate matters, as of PDR (late 2017) the final scope of the project is still open due to funding uncertainties.
All of these factors have made the development of a system architecture particularly challenging. The architecture of 2009 - whilst ultimately linked to the structure of the consortium - is no longer fit for the technical purpose. A revised system architecture, and the resulting product breakdown structure, have had to be carefully adapted to satisfy a wide range of constraints. It must be solid enough to allow the project to progress clearly, but flexible enough to deal with what changes may lie ahead.
We have applied systems engineering processes to develop an architecture that is clean and robust, whilst including some inevitable compromises driven by overall project considerations. The paper will describe the processes we have followed, how the architecture has evolved, and how we have dealt with constraints and compromises forced by the existing consortium structure. We will present the baseline architecture for HARMONI and explain how this maps onto other areas of the project and the overall instrument development process. This is an example of system architecting in the real world of moving targets and immovable obstructions.

Maunakea Spectroscopic Explorer will be a 10-m class highly multiplexed survey telescope, including a segmented primary mirror and robotic fiber positioners at the prime focus. MSE will replace the Canada France Hawaii Telescope (CFHT) on the summit of Mauna Kea, Hawaii. The multiplexing includes an array of over four thousand fibers feeding banks of spectrographs several tens of meters away.

We present an overview of the requirements flow-down for MSE, from the Science Requirements Document to the Observatory Requirements Document. We have developed the system performance budgets and updated the budget architecture of our evolving project. We have also identified the links between subsystems and system budgets (and subsequently science requirements) and included system budgets that are unique to MSE as a fiber-fed facility.

All of this has led to a set of Observatory Requirements that is fully consistent with the Science Requirements.

The Giant Magellan Telescope (GMT) is going to be a complex and versatile exploration machine, which makes systems engineering for GMT challenging. This paper addresses three particularly critical aspects of systems engineering: a general and flexible definition of the observatory, image quality specifications, and compliance assessment for statistical performance requirements. The observatory definition and its high-level flow-down are captured in a set of Foundation Documents, from level-1 (stakeholders’ intentions and the objective specifications of science data) through level-2 (engineering specification) to level-3 (architectural design and operational concepts). Image quality requirements for atmospheric resolution modes balance observing-efficiency considerations against system capabilities that enable exceptional image quality under the best conditions. To address statistical specifications, requirements validation and early design verification are carried out in an integrated modeling framework that takes advantage of sequential Monte Carlo analysis over the Standard Year, representing our understanding of correlated summit conditions and GMT operational constraints.
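A statistical performance requirement of the kind described above (e.g., "the image quality metric shall not exceed a threshold for at least a given fraction of conditions") can be assessed by sampling conditions and comparing a quantile of the resulting metric against the threshold. A minimal sketch with invented numbers, not the actual GMT budgets or Standard Year data:

```python
import numpy as np

def statistical_compliance(samples, threshold, quantile=0.5):
    """Check a requirement of the form 'metric <= threshold for at least
    the given fraction of conditions' against Monte Carlo samples of the
    metric; returns (compliant, achieved quantile value)."""
    achieved = np.quantile(samples, quantile)
    return achieved <= threshold, achieved

# Toy metric: a lognormal "image quality" value over 10,000 sampled nights;
# the distribution and the 0.7 threshold are invented for illustration.
rng = np.random.default_rng(1)
iq_samples = rng.lognormal(mean=-0.5, sigma=0.3, size=10_000)
compliant, median_iq = statistical_compliance(iq_samples, threshold=0.7)
```

In a real assessment the samples would come from the integrated model driven by correlated condition draws, rather than an assumed closed-form distribution.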

The Cherenkov Telescope Array (CTA) is the next generation ground-based observatory for gamma-ray astronomy at very high energies. With more than 100 telescopes at two sites, CTA will be the world’s largest and most sensitive high-energy gamma-ray observatory, covering the full sky with a northern array located at the Roque de los Muchachos astronomical observatory on the island of La Palma (Spain) and a southern array near the European Southern Observatory site at Paranal (Chile). Three classes of telescope types with imaging Cherenkov cameras, calibration, clock, and timing systems, site infrastructure, as well as control and data handling/processing software, developed in a large international collaboration, are required to build the CTA observatory system.

As a large and international collaboration, with almost all hardware and software elements to be delivered as in-kind contributions by participating institutes to build a complex observatory system on two sites, CTA faces quite a few challenges in the areas of systems engineering and project management.

This paper aims to compare various aspects of systems engineering between space and ground-based astronomy projects. After a brief roundup of the development of systems engineering practices in space, we discuss the rapidly progressing adoption of similar methods in complex ground projects. Special attention is given to the analysis of increasing system complexity on ground which leads to a commensurate increase in project effort and cost. The importance of development of enabling technologies and improvement of engineering methodologies are discussed by specific examples.

The ESO Technology Development programme consists of a series of projects aimed at developing key future technologies for astronomy and the ESO programme in particular. Key projects include deformable mirrors, lasers, detectors, real time computers and coatings. Working with industry in these areas requires careful attention to the analysis and management of risk.

The Cherenkov Telescope Array (CTA) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes and designs, used to cover different energy domains. These telescopes interact with other systems (e.g., central observation execution software, infrastructure, etc.) fundamental to the Observatory operations. We have created a set of about 70 use cases (UCs) that describe the different types of interactions of a generic CTA telescope with its surrounding systems. These UCs describe different scenarios, from normal night operations to reactions to hazardous situations. Using these UCs, we can refine requirements, identify interfaces, and specify the expected behaviour of the telescopes. The UCs are also an important ingredient in preparing the test cases for the integration and validation of the telescopes into the CTA Observatory. This contribution summarises the methodology and tooling we have followed to identify and specify these UCs, as well as the main results obtained.

An important part of any new observatory construction project is to address the needs of long-term operations. The Giant Magellan Telescope Project, currently in the design phase, addresses these needs in several ways. We have hired an operations expert and have several other staff experienced in various aspects of observatory operations. Our high-level documents include an Observatory Operations Concept, describing how the observatory and its subsystems will work together to provide a high level of service and performance to the astronomers who form their user base. It is also important to estimate the resources required for operations and to develop a plan for the transition from project to operations.
Developing system behaviors early in the design process is an important tool for acquiring knowledge about interfaces and uncovering requirements. The GMT design has progressed far enough that we are now mostly writing more evolved behaviors, drawing on knowledge of existing requirements and interfaces and describing the behaviors in greater detail. Nevertheless, we are still uncovering new requirements and informing the preliminary hazard analysis as we describe various operational tasks. Over 300 behaviors were identified by the subsystem teams in collaboration with the Systems Engineering team.
The Observatory Operations Concept outlines a set of “critical” behaviors: those that require high reliability or the most resources, or are otherwise notable. These include exchanges of the cells holding the primary mirror segments, mirror recoating, installation and maintenance of instruments, cleaning of the major optics, and recovery from major seismic events.
We are taking a similar approach to that used by the software and controls team, identifying high-level “user stories” to inform the development of states, status, actors, actions, and events, as well as the software interfaces.
This contribution describes in more detail the goals of the Observatory Operations Concept and of system behaviors, along with the associated tools and products, and provides examples of how system behavior studies have led to significantly more robust requirements and better-informed design decisions.

The Large Synoptic Survey Telescope (LSST) is under construction in Chile. To ensure the delivered system meets the science goals, the project defines a set of performance metrics and constantly monitors the system performance by evaluating the metrics against their requirements. In this paper, we describe the latest updates to the comprehensive tool set we have developed for evaluating the LSST system performance, which we collectively refer to as the LSST integrated model, and recent work on utilizing these tools for system verification. We also broaden our set of performance metrics and introduce an integrated-étendue-based metrics framework, which is useful not just for system verification, but also for mitigation and optimization. Most of the major metrics currently being monitored fit under this framework, including image quality, system throughput, the single-visit point source 5σ detection limit, etc. We also monitor the Point Spread Function (PSF) ellipticity, which is not part of this metrics framework but is an output of the integrated model.

A simulation-based systems engineering framework is defined to design, optimize and simulate complex, large-scale systems under uncertainty through integrated models encompassing multiple disciplines such as, for example, structural-thermal-optical. A model's input parameter uncertainties are rigorously quantified upstream of the model through literature reviews, experiments or elicitation from subject matter experts, and then propagated through the model to determine their influence on specific quantities of interest requested as output. A variance-based global sensitivity analysis is used to identify and rank the critical system parameters, based on their contribution to the variance of the quantities of interest. These parameters can then be targeted by additional research through optimal parameter inference experiments in order to reduce their variability. By so doing, one incorporates uncertainty in the model and updates the model iteratively as new parameter information becomes available. This process increases one's knowledge of the system, its subcomponents and all of their mutual interactions; such knowledge is crucial when important design decisions are to be made. When applied early in a project's life-cycle, it can potentially reduce mission costs related to resources (e.g., mass or power) and processes (e.g., design, verification and validation). As a case study, this paper presents results from the application of this framework to the integrated model of the James Webb Space Telescope, ultimately used to revise the model uncertainty factors applied to nominal temperature predictions for the benchmark hot-to-cold slew thermal analysis case.
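The variance-based global sensitivity analysis described above can be sketched with a simple Monte Carlo ("pick-freeze") estimator of first-order Sobol indices. The toy model and its three uncertain inputs below are illustrative stand-ins, not JWST parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical integrated model: the quantity of interest depends
    # nonlinearly on three uncertain input parameters.
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

n, d = 100_000, 3
a = rng.random((n, d))          # two independent input sample sets,
b = rng.random((n, d))          # inputs uniform on [0, 1]

ya, yb = model(a), model(b)
var_y = ya.var()

# First-order Sobol index of parameter i: redraw all inputs except
# column i ("pick-freeze"), then estimate Var_i / Var(y).
s1 = []
for i in range(d):
    ab = b.copy()
    ab[:, i] = a[:, i]
    s1.append(np.mean(ya * (model(ab) - yb)) / var_y)

ranking = np.argsort(s1)[::-1]   # most influential parameter first
```

Here the quadratic term dominates the output variance, so parameter 1 ranks first; in the framework above, such a ranking is what drives the follow-up parameter inference experiments.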

A detailed Computational Fluid Dynamics (CFD) model of the Giant Magellan Telescope (GMT) has been developed and used to simulate and analyze the aero-optical environment around the observatory. The model accounts for the major observatory components, such as the primary (M1) and secondary (M2) mirrors, the M2 supporting truss, other subcomponents of the telescope mount, and the enclosure building, along with the auxiliary and site support buildings on the summit. A large topographical area around the installation site is included. This study evaluates three different lower enclosure designs: a closed soffit, an open soffit, and a perforated ring-wall (partially closed soffit). Time-varying CFD simulations provide detailed flow and temperature fields along the optical path, which are subsequently used to compute optical parameters such as Optical Path Difference (OPD) maps and normalized Point Source Sensitivity (PSSn), the GMT Image Quality (IQ) metric. Results show that enclosure-induced turbulent flow patterns and refractive index variations have a greater influence on optical performance than flow and thermal behavior external to the enclosure. Instantaneous and mean PSSn values obtained for the three soffit configurations show minor differences, indicating that the lower enclosure design has minimal impact on observatory optical performance for the simulated operating conditions.

We present an estimate of the optical performance of the Thirty Meter Telescope (TMT) after execution of the full telescope alignment plan. The TMT alignment is performed by the Global Metrology System (GMS) and the Alignment and Phasing System (APS). The GMS first measures the locations of the telescope optics and instruments as a function of elevation angle. These initial measurements are used to adjust the optics positions and build initial elevation look-up tables. The telescope is then aligned using starlight as the input for the APS at multiple elevation angles. APS measurements are used to refine the telescope alignment and to build elevation- and temperature-dependent look-up tables. Due to the number of degrees of freedom in the telescope (over 10,000), the ability of the primary mirror to correct aberrations on other optics, the tight optical performance requirements and the multiple instrument locations, it is challenging to develop, test and validate these alignment procedures. In this paper, we consider several GMS and APS operational scenarios. We apply the alignment procedures to a model-generated TMT, which includes various quasi-static errors such as polishing errors, passive support errors, thermal and gravity deformations, and installation position errors. Using an integrated optical model and a Monte-Carlo framework, we evaluate the aligned states of the TMT using optical performance metrics at multiple instrument and field-of-view locations. The optical performance metrics include the Normalized Point Source Sensitivity (PSSN), RMS wavefront error before and after Adaptive Optics (AO) correction, pupil position change, and plate scale distortion.

This paper describes the evolution of the processes, methodologies and tools developed and utilized on the Large Synoptic Survey Telescope (LSST) project that provide a complete end-to-end environment for verification planning, execution, and reporting. LSST utilizes No Magic’s MagicDraw Cameo Systems Modeler tool as the core tool for systems modeling, a Jira-based test case/test procedure/test plan tool called Test Management for Jira for verification execution, and Intercax’s Syndeia tool for bi-directional synchronization of data between Cameo Systems Modeler and Jira. Several additional supporting tools and services are also described to round out a complete solution. The paper describes the project’s needs, overall software platform architecture, and customizations developed to provide the end-to-end solution.

Proc. SPIE 10705, Verifying Interfaces and generating interface control documents for the alignment and phasing subsystem of the Thirty Meter Telescope from a system model in SysML, 107050V (10 July 2018); doi: 10.1117/12.2310184

This paper presents a novel method for verifying interfaces and generating interface control documents (ICDs) from a system model in SysML™. In systems and software engineering, ICDs are key artifacts that specify the interface(s) to a system or subsystem, and are used to control the documentation of these interfaces. ICDs enable independent teams to develop connecting systems that use the specified interfaces. In the context of the Thirty Meter Telescope (TMT), interface control documents also act as contracts for delivered subsystems. The Alignment and Phasing System (APS) is one such subsystem. APS is required to implement a particular interface, and formulates requirements for the interfaces to be provided by other components of TMT that interface with APS. As the design of APS matures, these interfaces are frequently refined, making it necessary for related ICDs to be updated. In current systems engineering practice, ICDs are maintained manually. This manual maintenance can lead to a loss in integrity and accuracy of the documents over time, resulting in the documents no longer reflecting the actual state of the interfaces of a system. We show how a system model in SysML™ can be used to generate ICDs automatically. The method is demonstrated and evaluated through two case studies pertaining to APS: the interface of APS to the primary mirror control system (M1CS), and the interface of APS to the Telescope Control System (TCS).
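As a minimal illustration of the idea, ICD content can be generated mechanically from interface data held in a model, so the documents always reflect the model's current state. The in-memory structure, subsystem names, and signals below are hypothetical, not drawn from the actual TMT SysML model:

```python
# Toy "model" of SysML-style interface connectors (hypothetical data).
interfaces = [
    {"from": "APS", "to": "M1CS", "signal": "segmentPistonCmd",
     "type": "float64[492]", "rate_hz": 1.0},
    {"from": "APS", "to": "TCS", "signal": "offloadRequest",
     "type": "struct", "rate_hz": 0.1},
]

def generate_icd(system, model):
    # Select every connector touching `system` and render a simple table.
    rows = [i for i in model if system in (i["from"], i["to"])]
    lines = [f"ICD for {system}",
             "from | to | signal | type | rate [Hz]"]
    for r in rows:
        lines.append(f'{r["from"]} | {r["to"]} | {r["signal"]} | '
                     f'{r["type"]} | {r["rate_hz"]}')
    return "\n".join(lines)

icd_m1cs = generate_icd("M1CS", interfaces)
```

Because the table is derived rather than hand-edited, refining an interface in the model automatically propagates to the regenerated document.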

The OpenSE Cookbook is an open-source collection of patterns, procedures, and best practices targeted at systems engineers who seek guidance on applying model-based and executable systems engineering (MBSE) using SysML. Its content has emerged from the system-level modeling efforts on the European Framework Program 6 (FP6) and the Thirty Meter Telescope (TMT). The TMT MBSE approach applied the Executable Systems Engineering Method (ESEM) and the Open Model Based Engineering Environment (OpenMBEE) to specify, analyze, and verify requirements of TMT’s Alignment and Phasing System (APS) and the Narrow Field Infrared Adaptive Optics System (NFIRAOS). In these applications, implicit dependencies are made explicit in a formal model through the use of ESEM, OpenMBEE, and SysML modeling constructs. The value proposition for applying this MBSE approach was to establish precise requirements and fine-grained traceability to system designs, and to verify key requirements beginning early in development. The integration of ESEM and the OpenMBEE tooling infrastructure (providing linked data and web operability) is a significant added value of the MBSE approach. The APS is responsible for the overall pre-adaptive-optics wavefront quality, using starlight to measure wavefront errors and align the TMT optics. In the formally integrated and executable SysML model, simulations are performed to analyze the impact of changed requirements and to verify specified constraints for various operational scenarios.

The APS team used several modeling patterns to capture information such as the requirements, the operational scenarios, the involved subsystems and their interaction points, the estimated or required time durations, and the mass and power consumption. Adaptive optics systems are designed to sense real-time atmospheric turbulence and correct the telescope’s optical beam to remove its effect. The system model for the adaptive optics operational modes was developed to capture sequence behaviors and operational scenarios, and to run Monte-Carlo simulations for verifying acquisition time, observing efficiency, and operational behavior requirements. The model is particularly useful for investigating the effect of parallelization, identifying interface issues, and re-ordering sequence acquisition tasks. A former version of the Cookbook (now updated with MBSE challenges, goals, and lessons learned) included modeling guidelines and conventions for all system aspects, hierarchy levels, and views, which were developed for the Active Phasing Experiment (APE), an opto-mechatronic technology demonstrator for the Extremely Large Telescope (ELT). The Cookbook uses the above-mentioned system models as real-world case studies to demonstrate and document the application of the recipes, also providing instructional examples and addressing the available tooling support. The Cookbook is accompanied by a number of SysML models and model libraries which facilitate model authoring and maintenance. The Cookbook covers the different aspects of Systems Engineering, such as management of requirements, design (behavior and structure), interfaces, interdisciplinary integration, analysis, trade studies, and technical resources. This paper presents the background, motivation, and architecture of the Cookbook, and highlights some of its key content.
Examples include interface management, error budget management, requirements verification, Monte Carlo-driven analysis, and timing analysis of operational scenarios. The paper also discusses how the capabilities of OpenMBEE contributed significantly to the adoption of executable systems engineering.

The OSIRIS (Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy) Multi-Object Spectroscopy (MOS) observing mode has been available to the science community of the GTC (Gran Telescopio Canarias) since early 2014. The MOS production line allows researchers to specify a MOS observation in a self-contained way using a software tool, bridging the gap between the science aims and the multiplexed spectroscopic data gathering at the telescope. Computer-vision-based quality control checks of the produced masks give the researcher the guarantee that the observation will perform as expected.

This article describes the architecture of the MOS observation production line, its activities, actors, and subsystems, and how all of them mesh so that the production line works efficiently and effectively, in an automated way, using a model-centric approach where the observation design acts as the single source of truth for the entire organization.

The Cherenkov Telescope Array (CTA) is planned as the first ground-based gamma-ray observatory open to the worldwide physics community. The CTA Observatory (CTAO) will consist of arrays of up to 100 telescopes at two sites, one in the Northern and one in the Southern hemisphere, as well as complex and distributed software systems for an efficient operation of the arrays and for the management and scientific exploitation of the CTA data. One of the challenges in the design of such a large installation is to ensure that all the systems that compose the CTAO have well-defined scope and identified interfaces, allowing it to work reliably as a seamless whole. In this contribution, we provide an overview on a methodology for a model-based architecture approach, tailored to the CTA needs, with the main goals to (i) capture the stakeholder interactions with the CTAO, (ii) capture the processes and activities that will be required to successfully operate the CTAO and meet stakeholder expectations, including science operations and maintenance, (iii) agree on a functional decomposition of the CTAO into (sub-)systems and an allocation of the functionality to the (sub-)systems to assign responsibilities and identify interfaces. To accomplish this, we have developed an architecture approach based on process-based system scoping and using a notation based on the SysML and UML formalisms. The different views of the architecture model are presented, each focusing on different aspects of the CTAO. These views contain, among others, stakeholders and project objectives, activity diagrams for describing the CTAO processes, the context and structure of the CTAO system and sub-systems, and their relationships. In this contribution, we will focus on the methodology with a few selected examples.

The Giant Magellan Telescope (GMT) will feature two Gregorian secondary mirrors, an adaptive secondary mirror (ASM) and a fast-steering secondary mirror (FSM). The FSM has an effective diameter of 3.2 m and consists of seven 1.1 m diameter circular segments, which are conjugated 1:1 to the seven 8.4 m segments of the primary. Each FSM segment contains a tip-tilt capability for fast guiding to attenuate telescope wind shake and mount control jitter. This tip-tilt capability thus enhances performance of the telescope in the seeing-limited observation mode. The tip-tilt motion of the mirror is produced by three piezo actuators. In this paper we present a simulation model of the tip-tilt system which focuses on the piezo actuators. The model includes hysteresis effects in the piezo elements and the position feedback control loop.
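A minimal sketch of the kind of hysteresis modeling involved, using a single backlash ("play") operator, the building block of Prandtl-Ishlinskii piezo models, together with a simple integral position loop. All numbers are illustrative, not FSM parameters:

```python
import numpy as np

def play_operator(u, width):
    # Rate-independent backlash ("play") operator: the output follows the
    # input only after the input has moved by more than `width`.
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = min(max(y[k - 1], u[k] - width), u[k] + width)
    return y

# Open loop: a triangular voltage sweep traces out a hysteresis loop,
# so the up-going and down-going branches give different positions.
u = np.concatenate([np.linspace(0.0, 1.0, 200), np.linspace(1.0, 0.0, 200)])
y_open = play_operator(u, width=0.1)

# Closed loop: integral position feedback largely hides the hysteresis.
target, ki, width = 0.5, 0.5, 0.1
y = v = 0.0
for _ in range(500):
    v += ki * (target - y)                 # integral controller output
    y = min(max(y, v - width), v + width)  # one step of the play operator
```

The closed-loop position converges to the target despite the nonlinearity, which is the role of the position feedback loop described in the abstract.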

The 25.4 m Giant Magellan Telescope (GMT) consists of seven 8.4 m primary mirror (M1) segments with matching segmentation of the Gregorian secondary mirror (M2). The GMT will operate in four basic optical correction modes, Natural Seeing (NS), Ground Layer Adaptive Optics (GLAO), Natural Guide Star Adaptive Optics (NGAO) and Laser Tomography Adaptive Optics (LTAO). Each of these modes must deliver a specified combination of image quality, field of view, and sky coverage over a range of environmental conditions.
With a doubly segmented mirror configuration, even in the simplest of the correction modes the GMT includes over one thousand controllable degrees of freedom. Exogenous and internal sources of disturbances and noise acting on these degrees of freedom will limit the image quality. The different ranges of motion and bandwidths of the different degrees of freedom enable a cascaded correction of the wavefront error, successively rejecting global to local disturbances. This frequency and spatial separation allows allocating the disturbances in stages, treating the residuals of the low spatial and temporal corrections as the disturbance for the high-order corrections.
While a first approach can consider the analysis of systems in isolation in order to allocate coarse budgets, a complex control system such as that of the GMT requires a Dynamic Optics Simulation (DOS) to account for the real interactions between the controlled plant and the controllers. For example, some control loops, such as the M1 figure control system, will have an update rate of only 0.03 Hz, while the Adaptive Secondary Mirror (ASM) will be updated at 1 kHz. The DOS is an end-to-end simulation environment that brings together optics, finite element models (FEM), mechanical motions, surface deformations and control models applied to the GMT main optics. At the center of the DOS there is an optics propagation module with both geometric ray tracing and Fourier propagation capability. The dynamic response of the telescope mount and the large M1 segments has been modeled by applying Craig-Bampton reduction to finite element models. These reductions have been reordered in a second-order form, allowing higher computational efficiency than traditional state-space models. Each M1 segment is controlled by an array of 330 actuators with realistic precision, noise and discretization errors. The structural dynamics model can be used in time-domain simulations that account for all the non-linear effects of actuators and sensors, or in a linear frequency-domain model to run stochastic analyses more efficiently.
A high resolution Computational Fluid Dynamics (CFD) model has been developed for simulating unsteady turbulent flow over the optical system. These simulations provide unsteady pressure fluctuations over the main optics and effects of varying index of refraction in the optical path for different operating conditions. These quantities are subsequently used for estimating wind induced image jitter and thermal (dome and mirror) seeing by applying the combined structural, control, and optical models described above.
The DOS allows GMT to understand the sensitivity of image quality to any of the thousands of parameters of the plant and control system. Due to the cascaded layers of control loops, the DOS allows specifying design parameters without over-constraining the solution space.
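The second-order form mentioned above can be sketched with a tiny modal model, qdd + 2*zeta*omega*qd + omega^2*q = f, integrated directly with a semi-implicit (symplectic) Euler step instead of being converted to first-order state space. The frequencies, damping and forcing below are illustrative, not GMT values:

```python
import numpy as np

# Three illustrative modes of a reduced structural model.
omega = 2 * np.pi * np.array([10.0, 27.0, 55.0])  # modal frequencies [rad/s]
zeta = 0.02                                       # modal damping ratio
dt, nsteps = 1e-4, 20000                          # 2 s of simulated time

q = np.zeros(3)                  # modal displacements
qd = np.zeros(3)                 # modal velocities
f = np.array([1.0, 0.5, 0.2])    # constant modal forcing (toy wind load)

for _ in range(nsteps):
    qdd = f - 2 * zeta * omega * qd - omega**2 * q
    qd += dt * qdd               # update velocity first (semi-implicit Euler)
    q += dt * qd                 # then displacement, using the new velocity

static = f / omega**2            # expected steady-state (static) deflection
```

Keeping the model in second-order form halves the state dimension handled per step relative to an equivalent first-order realization, which is the efficiency argument made in the abstract.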

Extremely large telescopes are characterized by high-degree-of-freedom control systems used to coordinate multiple segments and mirrors. The dynamics can interact such that single-loop requirements do not provide sufficient stability and performance robustness. This paper reviews the relevant multivariable robustness and performance methods, and presents examples from Giant Magellan Telescope (GMT) motion control systems.
Singular value bounds of multivariable frequency responses are well-developed computational tools that provide a methodology applicable to telescope analysis. The singular value bounds are relevant because they give the maximum sensitivity for coupled, multivariable systems. Singular values are recommended for analysis, and can be considered for requirements. With sufficient numbers of sensors, these multivariable bounds are measurable and hence can be validated. There is also a practical reason for using multivariable tools: they combine many, perhaps thousands of, transfer functions and/or measurements that can be compared against singular value bounds.
The first example is the AZ/EL mount control. Coupling tends to be small, so single-input analysis tools suffice; nevertheless, the mount control system provides a good introduction to the multivariable methodology. The maximum singular value of both the sensitivity and complementary sensitivity functions provides a good bound for crossover robustness near the position control bandwidth, typically +6 dB near 1 Hz. The high-frequency region of the complementary sensitivity function provides a good bound on robustness with respect to unmodeled structural dynamics, typically −40 dB above the maximum frequency of the finite element modes.
Similar multivariable stability robustness bounds can be applied to position control of the M2 assembly, both for the macrocell relative to the top-end assembly, and for each mirror subassembly relative to the macrocell. The latter includes control of the Fast Steering Mirror, where 21 PZT actuators control the tip and tilt of the seven secondary mirror segments. The risk is that the 21 PZT control loops may meet good classical phase and gain margin robustness metrics when measured as individual, single-input-single-output systems, while the multivariable bound exceeds either the +6 dB or the −40 dB bound. This can occur due to interaction in the macrocell, the structure used to support the individual segments. Whether or not this interaction occurs depends on the bandwidth of the control system relative to the structural modes of the macrocell. This tradeoff is important, and the maximum singular value is a good tool to test for this sensitivity.
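These maximum-singular-value bounds are straightforward to evaluate numerically. Below is a hedged toy example, a coupled two-axis loop with integrator dynamics and static cross-coupling (numbers are illustrative, not GMT values), sweeping the largest singular value of the sensitivity S and complementary sensitivity T over frequency:

```python
import numpy as np

wc = 2 * np.pi * 1.0                     # ~1 Hz crossover frequency
G = np.array([[1.0, 0.3],
              [0.3, 1.0]])               # static coupling between two axes

freqs = np.logspace(-2, 2, 400)          # Hz
sig_S, sig_T = [], []
for f in freqs:
    s = 1j * 2 * np.pi * f
    L = (wc / s) * G                     # loop transfer matrix L(s)
    S = np.linalg.inv(np.eye(2) + L)     # sensitivity S = (I + L)^-1
    T = L @ S                            # complementary sensitivity
    sig_S.append(np.linalg.svd(S, compute_uv=False)[0])
    sig_T.append(np.linalg.svd(T, compute_uv=False)[0])

peak_S_dB = 20 * np.log10(max(sig_S))    # compare against a +6 dB bound
```

For this well-damped toy loop the sensitivity peak stays below the +6 dB crossover bound and T rolls off at high frequency; a resonant or more strongly coupled plant would be flagged by the same sweep.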

The Giant Magellan Telescope (GMT) M1 Subsystem includes the seven 8.4 meter M1 (Primary) Segment Mirrors and the steel mirror cell weldments which house the mirror active support and thermal control systems. The segmented nature of the primary mirror and the requirement that each of the six off-axis segment cells be interchangeable impose requirements on the range of motion and control beyond those applicable to the M1 subsystems on 6.5 m and 8.4 m telescopes using the structured honeycomb mirrors. The subsystem is both technically challenging to design and costly to produce. The M1 Subsystem is allocated a large fraction of the GMT natural seeing image quality budget. Support actuator tolerances, range of motion, accuracy, and precision, as well as the ability of the thermal control system to regulate the primary mirror temperature, all have a significant effect on the image quality. The authors have developed several linear models to estimate the effect of force and moment errors at the M1 Segment Active Supports and the non-uniformity of temperature across M1 segments on the delivered image quality. These results are coupled to the Wavefront Control Subsystem model and are integrated into the GMT system-level simulations to produce a final image quality budget and to quantify the effectiveness of the Wavefront Control Subsystem to compensate for M1 Subsystem error. In this paper, we present the modeling process and preliminary performance results obtained using the models.

An FEM model of the telescope has been created to analyze the telescope behavior under all the significant actions: gravity, wind, seismic and thermal loads, and manufacturing and alignment errors. The model includes the telescope pier and the pier foundations.

A Wind Tunnel Test campaign has been carried out on a scaled model of the Dome and Telescope to assess the wind action on the structures. The campaign has been supported by a detailed CFD analysis with several cases of Dome orientation, Dome configuration, wind velocity and turbulence intensity.

A State Space model of the telescope has been set up to perform the Servo analysis of the azimuth and altitude control system. A comprehensive State Space model of the Dome, the Ground, and the Main Structure has been set up to perform the vibration analysis of the whole observatory (including the machinery in the auxiliary building and the erratic vibrations from the ground).

The present paper provides a concise description of the generated models and the most significant results.

We present the updated design and architecture of the end-to-end simulator model of the high-resolution spectrograph HIRES for the future Extremely Large Telescope (ELT). The model simulates the propagation of photons from the scientific object of interest up to the detector, allowing evaluation of the performance impact of the different parameters in the spectrograph design. The model also includes a calibration light module, suitable for evaluating data reduction requirements. In this paper, we detail the architecture of the simulator and the computational model, which are strongly characterized by the modularity and flexibility that will be crucial for next-generation instrumentation projects such as those for the ELT, given their high complexity and long design and development times. We also highlight the cloud computing architecture adopted for this software, based on Amazon Web Services (AWS). Finally, we present synthetic images obtained with the current version of the end-to-end simulator, based on the requirements for ELT-HIRES (especially high radial-velocity accuracy), which are then ingested into the Data Reduction Software (DRS) of CRIRES+ as a case study.

Balloon-based telescopes offer an opportunity to conduct science observations in an environment with almost no atmospheric effects. However, balloon-based platforms involve a wide range of thermal environments as well as the need to point a lightweight telescope over a large elevation range. The Gondola for High Altitude Planetary Science (GHAPS) was designed to provide nearly diffraction-limited observations over the visible and infrared spectrum with a 1-meter aperture. To achieve such performance, detailed Structural Thermal Optical Performance (STOP) analysis was used to predict telescope performance. Software was built to automate the analysis process, enabling thermal, structural and optical analyses to be executed quickly with less effort. The end result was the capability to analyze both generic operating conditions and Design Reference Mission conditions, producing predictions that could be used to evaluate the quality of the science return.

Phase diversity is a focal-plane wavefront sensing technique that retrieves the phase aberration introduced by a camera from two images of an arbitrary object, one of which (the diverse image) is intentionally corrupted by a known aberration. We present here the results of a simulation campaign aimed at assessing the validity of this approach for sensing non-common path aberrations (NCPA) in SHARK-NIR, the new-generation high-contrast imager for the Large Binocular Telescope (LBT). The aberrations to be retrieved have been modeled on a realistic error budget of the instrument, while the images are generated with an end-to-end Fresnel simulator which makes use of atmospheric phase screens to simulate realistic closed-loop observations. A wide parameter space is explored in order to identify the critical parameters and to estimate the expected level of correction.
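The forward model underlying phase diversity can be sketched in a few lines: the same unknown pupil phase produces two PSFs, one with a known defocus added, and the solver fits the phase that explains both images. This is a hedged toy Fraunhofer model, not the SHARK-NIR end-to-end Fresnel simulator; pupil size, aberration and defocus amplitudes are invented:

```python
import numpy as np

n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] / (n // 4)  # pupil coords
r2 = xx**2 + yy**2
pupil = (r2 <= 1.0).astype(float)

# Unknown aberration to retrieve (here: a little astigmatism, in radians).
phi = 0.5 * pupil * (xx**2 - yy**2)
# Known diversity: defocus added to the second channel.
defocus = 2.0 * pupil * (2 * r2 - 1)

def psf(phase):
    # Fraunhofer propagation: PSF = |FFT of the complex pupil field|^2.
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

img_focused = psf(phi)
img_diverse = psf(phi + defocus)   # the intentionally corrupted image
```

A phase-diversity solver then estimates phi by minimizing the joint mismatch between these two measured images and the same forward model, which is what makes an arbitrary (unknown) object usable.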

An important tool for the development of the next generation of extremely large telescopes (ELTs) is a robust Systems Engineering (SE) methodology. GMACS is a first-generation multi-object spectrograph that will work at visible wavelengths on the Giant Magellan Telescope (GMT). In this paper, we discuss the application of SE to the design of next-generation instruments for ground-based astronomy and present the ongoing development of SE products for the GMACS spectrograph, currently in its Conceptual Design phase. SE provides the means to assist in the management of complex projects, and in the case of GMACS, to ensure its operational success, maximizing the scientific potential of GMT.

Remote operation of observatories has been a topic of interest for many years. This paper discusses a general approach to determining what it will take to transition from on-site summit nighttime operation to remote nighttime operation of a facility. It is informed by involvement in projects at Canada-France-Hawaii Telescope, Gemini Observatory, and W. M. Keck Observatory. While these projects had differences, they all shared the goals of upgrading an operating observatory that is on sky every night to improve efficiency of operations without negative impact on science. The approach combines project management (PMI) and systems engineering (INCOSE) methodologies and tools to develop an understanding of the impact on operations, determine the scope and requirements for new capabilities as well as additional functionality for existing systems, identify and manage risks, and determine how to incrementally move toward remote operation by integrating changes into current operations along the way.

The Sunrise observatory consists of a one-meter solar telescope operated in the gondola of a stratospheric balloon. The first two science flights of Sunrise demonstrated unprecedented imaging quality at lower cost than satellite-based missions, as well as a general problem of balloon missions: micro-vibrations occurred during parts of the observation time and made the determination of the point spread function difficult. This paper introduces an adaptation of deconvolution from wave-front sensing (DWFS) as a possible solution. The case of vibrations in the common path is verified in simulations. The use of high-cadence spectro-polarimeters is explored in order to extend DWFS to non-common-path errors at the scientific camera.

The Gregor At Night Spectrograph (GANS) is a new instrument currently being built for the GREGOR solar telescope at the Izaña observatory on Tenerife. Its primary science case will be the follow-up of planetary candidates detected by upcoming surveys focusing on bright targets (TESS, PLATO2). It will therefore be optimised for precise radial velocity determination and long-term stability. We have developed a ZEMAX-based software package to create simulated spectra, which are reduced using standard IRAF tasks. We used a solar model spectrum to determine the influence of S/N ratio, wavelength coverage, pixel sampling and telluric lines on the extracted radial velocities. Furthermore, we derived the effect of an asymmetric spectrograph illumination on the measured radial velocity.
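A hedged sketch of the kind of radial-velocity extraction being simulated: a toy spectrum with a few Gaussian absorption lines on a log-wavelength grid (where a Doppler shift is a uniform pixel translation), cross-correlated against an unshifted template. The line list and widths are invented, and real pipelines fit the CCF peak to sub-pixel precision rather than taking the integer-lag maximum:

```python
import numpy as np

c = 299792.458  # speed of light [km/s]

# Log-wavelength grid: a Doppler shift v moves every line by v/c in ln(lambda).
loglam = np.linspace(np.log(5000.0), np.log(5100.0), 4000)
dv_pix = c * (loglam[1] - loglam[0])          # velocity per pixel [km/s]

lines = np.log(np.array([5010.0, 5035.0, 5070.0]))   # toy line list

def spectrum(shift_kms):
    shifted = lines + shift_kms / c
    flux = np.ones_like(loglam)
    for l0 in shifted:
        flux -= 0.6 * np.exp(-0.5 * ((loglam - l0) / 2e-5) ** 2)
    return flux

template = spectrum(0.0)
observed = spectrum(7.0)                      # inject a 7 km/s shift

# Cross-correlate (continuum-subtracted) over integer-pixel lags.
lags = np.arange(-20, 21)
ccf = [np.dot(observed - 1, np.roll(template - 1, lag)) for lag in lags]
rv = lags[int(np.argmax(ccf))] * dv_pix       # recovered RV [km/s]
```

On this grid one pixel corresponds to roughly 1.5 km/s, so the integer-lag estimate recovers the injected shift to within a pixel; the S/N, sampling, and telluric studies in the abstract probe how such an estimate degrades.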

The ESA space mission Euclid is designed to map the geometry of the dark Universe and will be equipped with two on-board instruments. The VIS instrument [1] is composed of different subsystems, including the Power and Mechanism Control Unit (PMCU). The PMCU is developed and manufactured in France under the responsibility of the CEA (Commissariat à l'Énergie Atomique et aux Énergies Alternatives) with the support of CNES (Centre National d’Etudes Spatiales). It controls the VIS subsystems located in the cold PayLoad Module (PLM): the readout shutter, the calibration unit and the Focal Plane Array (FPA) thermistors. We describe the integration of the PMCU, its test philosophy and the results obtained to qualify the unit up to the Flight Model, in preparation for its delivery foreseen in Autumn 2018.

The Wide Field Optical Spectrograph (WFOS) is one of the first-light instruments of the Thirty Meter Telescope (TMT). It is a medium-resolution, multi-object, wide-field optical spectrograph. Since 2005 the conceptual design of the instrument has focused on a slit-mask based, grating-exchange design that will be mounted at the Nasmyth focus of TMT. Based on the experience with ESI, MOSFIRE and DEIMOS at Keck, we know that flexure-related image motion will be a major problem with such a spectrograph, and a compensation system is required to mitigate these effects.

We have developed a Flexure Compensation and Simulation (FCS) tool for TMT-WFOS that provides an interface to accurately simulate the effects of instrument flexure at the WFOS detector plane (e.g., image shifts) using perturbations of key optical elements, and also to derive corrective motions to compensate the image shifts caused by instrument flexure. We are currently using the tool to run Monte Carlo simulations to validate the optical design of a slit-mask concept we call Xchange-WFOS, and to optimize the flexure compensation strategy. We intend to use the tool later in the design process to predict the actual flexure by replacing the randomized inputs with the signed displacements and rotations of each element predicted by a global FEA model of the instrument.
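The Monte Carlo flexure-compensation idea can be sketched with a linear sensitivity matrix: random element perturbations map to an image shift at the detector, and corrective motions of a small set of actuated degrees of freedom are solved for by least squares. The matrix, dimensions, and perturbation scale below are invented for illustration, not WFOS values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear sensitivity matrix: image shift (x, y) at the
# detector per unit perturbation of 5 optical degrees of freedom.
A = rng.normal(size=(2, 5))
Ac = A[:, :2]          # the 2 degrees of freedom available for compensation

raw, resid = [], []
for _ in range(1000):                      # Monte Carlo over flexure states
    d = rng.normal(scale=1e-3, size=5)     # random element perturbations
    shift = A @ d                          # predicted image shift
    corr = np.linalg.lstsq(Ac, -shift, rcond=None)[0]
    raw.append(np.hypot(*shift))
    resid.append(np.hypot(*(shift + Ac @ corr)))

rms_raw = np.sqrt(np.mean(np.square(raw)))     # uncompensated RMS shift
rms_resid = np.sqrt(np.mean(np.square(resid))) # residual after correction
```

Replacing the random draws of `d` with signed FEA-predicted displacements, as the abstract describes, turns the same machinery from a statistical validation into a deterministic flexure prediction.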

Substantial optical telescope facilities have been under consideration within the United Arab Emirates (UAE) for some time. In early 2016 a project was started to identify two promising locations and collect on-site measurements at those locations for at least one annual cycle. These two locations were chosen primarily for their altitude and overall availability. During the second half of 2016, weather stations were deployed on both sites. One site was selected as the reference site, where some additional systems were also deployed, although these did not cover a full annual cycle. In this paper these very preliminary measurements are presented, and the conditions, primarily meteorological, in the eastern mountainous region of the UAE are discussed.

Fibre-fed spectroscopy requires that the output distribution of the optical fibre be as stable as possible. Effects such as scrambling and focal ratio degradation (FRD) play an important role in any fibre-fed instrument design, since they directly affect the output distribution of multi-mode fibres. These effects depend, among other factors, on the excited propagation modes. The propagation modes of different fibre geometries have different spatial distributions and therefore could exhibit different scrambling and FRD characteristics. A model is being developed at the Leibniz Institute for Astrophysics Potsdam (AIP) that describes the intrinsic effects of scrambling and FRD in optical fibres. The model is based on the Eigenmode Expansion Method (EEM). Within this theoretical framework it should be possible to compare the results of mode excitation in different fibre geometries. This work is part of a PhD thesis on the fibre system of MOSAIC, a multi-object spectrograph for the E-ELT.
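To give a rough sense of the mode counts an eigenmode expansion must handle, the number of guided modes of a step-index multi-mode fibre follows from the standard normalized frequency (V number). The core size, numerical aperture and wavelength below are typical illustrative values, not MOSAIC parameters.

```python
import math

def v_number(core_radius_um, wavelength_um, numerical_aperture):
    """Normalized frequency V = (2*pi*a / lambda) * NA of a step-index fibre."""
    return 2.0 * math.pi * core_radius_um / wavelength_um * numerical_aperture

def approx_mode_count(v):
    """Approximate number of guided modes, N ~ V^2 / 2, valid for V >> 1."""
    return v * v / 2.0

# Illustrative multi-mode fibre: 50 um core diameter, NA = 0.22, 600 nm light.
v = v_number(core_radius_um=25.0, wavelength_um=0.6, numerical_aperture=0.22)
n_modes = approx_mode_count(v)
```

With on the order of a thousand guided modes in a typical astronomical fibre, tracking each mode's spatial distribution is what makes the EEM comparison between fibre geometries computationally non-trivial.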

The Giant Magellan Telescope (GMT) will be one of the most powerful ground-based telescopes in the world upon commissioning at the Las Campanas Observatory (LCO) in Chile. The GMT enclosure protects the telescope and its systems from the external environment and plays a crucial role in delivering high-quality celestial imagery. This paper describes the development and application of a 3D finite element model of the GMT enclosure and key internal components. This model was developed by Boeing Research & Technology (BR&T), under contract from the Giant Magellan Telescope Organization (GMTO), to characterize the complex interplay of convective, radiative and conductive heat transfer between components within the GMT enclosure and the surrounding environment. The primary intent of this analysis tool is to provide GMT engineers with input conditions for detailed conjugate heat transfer and aero-optic simulations. These simulations will support GMTO’s optimization of the enclosure design to maximize image quality and daily imaging time with minimal use of active thermal controls.

The Observatorio del Roque de Los Muchachos (ORM) on the Canary island of La Palma has been selected as the alternate site for the Thirty Meter Telescope (TMT). Several potential locations on the summit needed to be investigated in terms of Ground Layer (GL) strength. Moreover, the presence of existing observatories necessitated a study of the interaction between these observatories and TMT. The lack of localized site testing and the nature of the terrain led to the use of Computational Fluid Dynamics (CFD) simulations combined with a seeing model for GL optical turbulence estimates. Three candidate locations for TMT at ORM were investigated under certain wind directions using steady-state simulations. For the most likely candidate, the influence of TMT on two nearby telescopes, the Gran Telescopio Canarias (GTC) and the Telescopio Nazionale Galileo (TNG), and vice versa, was also explored and conclusions were drawn.
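The final step of such a CFD-plus-seeing-model chain, converting an optical turbulence estimate into a seeing figure, can be sketched with the standard Kolmogorov relations between the integrated Cn^2, the Fried parameter r0 and the seeing FWHM. The integrated Cn^2 value below is an illustrative placeholder, not an ORM measurement.

```python
import math

ARCSEC_PER_RAD = 206265.0

def fried_parameter(cn2_integral, wavelength_m=500e-9, zenith_angle_rad=0.0):
    """Fried parameter r0 (m) from the integrated Cn^2 along the line of sight:
    r0 = [0.423 * k^2 * sec(z) * integral(Cn^2 dh)]^(-3/5), with k = 2*pi/lambda."""
    k = 2.0 * math.pi / wavelength_m
    sec_z = 1.0 / math.cos(zenith_angle_rad)
    return (0.423 * k**2 * sec_z * cn2_integral) ** (-3.0 / 5.0)

def seeing_fwhm_arcsec(r0_m, wavelength_m=500e-9):
    """Seeing-disc FWHM in arcseconds: FWHM ~ 0.98 * lambda / r0."""
    return 0.98 * wavelength_m / r0_m * ARCSEC_PER_RAD

# Illustrative ground-layer contribution: integrated Cn^2 of 2e-13 m^(1/3).
r0 = fried_parameter(2e-13)
fwhm = seeing_fwhm_arcsec(r0)
```

Comparing such derived FWHM values across candidate locations and wind directions is what allows the relative ranking of sites even without localized turbulence measurements.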