Abstract

The MEG experiment, designed to search for the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decay, completed data-taking in 2013 reaching a sensitivity level of \({5.3\times 10^{-13}}\) for the branching ratio. In order to increase the sensitivity reach of the experiment by an order of magnitude to the level of \(6\times 10^{-14}\), a total upgrade, involving substantial changes to the experiment, has been undertaken, known as MEG II. We present both the motivation for the upgrade and a detailed overview of the design of the experiment and of the expected detector performance.

Deceased: B. I. Khazin, A. Korenchenko, G. Piredda.

1 Introduction

1.1 Status of the MEG experiment in the framework of charged Lepton Flavour Violation (cLFV) searches

The experimental upper limits established in searching for cLFV processes with muons, including the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decay, are shown in Fig. 1 versus the year of the result publication. Historically, the negative results of these experiments led to the empirical inclusion of lepton flavour conservation in the Standard Model (SM) of elementary particle physics. During the past 35 years the experimental sensitivity to the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decay has improved by almost three orders of magnitude, mainly due to improvements in detector and beam technologies. In particular, ‘surface’ muon beams (i.e. beams of muons originating from \(\pi ^+\)s decaying at rest in the surface layers of the pion production target) with virtually monochromatic momenta of \({\sim 29}\,{\hbox {MeV}/c}\) offer the highest muon stop densities obtainable at present in low-mass targets, allowing the best resolution in positron momentum and emission angle while suppressing photon background production. The current most stringent limit is given by the MEG experiment [1] at the Paul Scherrer Institute (PSI, Switzerland) on the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decay branching ratio [2]:

\( \mathcal{B} ({\mu ^+ \rightarrow \hbox {e}^+ \gamma }) < 4.2\times 10^{-13}\)

at 90% confidence level (CL), based on the full data-set. Currently, the upgrade of the experiment, known as the MEG II experiment, is in preparation aiming for a sensitivity enhancement of one order of magnitude compared to the MEG final result.

The signal of the two-body \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decay at rest can be distinguished from the background by measuring the photon energy \(E_{\mathrm {\gamma }}\), the positron momentum \(p_{\mathrm {e}^{+}}\), their relative angle \(\varTheta _{\mathrm {e}^+ \gamma }\) and timing \(t_{\mathrm {e}^+ \gamma }\) with the best possible resolutions.

The background comes either from radiative muon decays (RMD) \({\mu ^+ \rightarrow \hbox {e}^+ \nu \bar{\nu }\gamma }\) in which the neutrinos carry away a small amount of energy or from an accidental coincidence of an energetic positron from Michel decay \(\mu ^+ \rightarrow \mathrm {e}^+ \nu \bar{\nu }\) with a photon coming from RMD, bremsstrahlung or positron annihilation-in-flight (AIF) \({\hbox {e}^+ \hbox {e}^- \rightarrow \gamma \gamma }\). In experiments using high intensity beams, such as MEG, this latter background is dominant.

The keys for \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) search experiments achieving high sensitivities can be summarised as

1. A high-intensity continuous surface muon beam to accumulate statistics while minimising the accidental background rate.

2. A low-mass positron detector with high rate capability to deal with the abundant positrons from muon decays.

3. A high-resolution photon detector, especially in the energy measurement, to suppress the high-energy random photon background.

The MEG experiment uses one of the world’s most intense continuous surface muon beams, with maximum rate higher than \(10^{8}\,\upmu ^{+}\)/s but, for reasons explained in the following, the stopping intensity is limited to \(3\times 10^{7}\,\upmu ^{+}\)/s. The muons are stopped in a thin polyethylene target, placed at the centre of the experimental set-up which includes a positron spectrometer and a photon detector, as shown schematically in Fig. 2.

The positron spectrometer consists of a set of drift chambers and scintillating timing counters located inside a superconducting solenoid COBRA (COnstant Bending RAdius) with a gradient magnetic field along the beam axis, ranging from 1.27 T at the centre to 0.49 T at either end, that guarantees a bending radius of positrons weakly dependent on the polar angle. The gradient field is also designed to quickly sweep spiralling positrons out of the spectrometer, reducing the track density inside the tracking volume.

The photon detector, located outside the solenoid, is a homogeneous volume of liquid xenon (LXe) viewed by photomultiplier tubes (PMTs) submerged in the liquid, which read the scintillation light from the LXe. The spectrometer measures the positron momentum vector and timing, while the LXe photon detector measures the photon energy as well as the position and time of its interaction in LXe. The photon direction is measured by connecting the interaction vertex in the LXe photon detector with the positron vertex in the target, obtained by extrapolating the positron track. All the signals are individually digitised by in-house designed waveform digitisers (DRS4) [3].

The number of expected signal events for a given branching ratio \( \mathcal{B} \) is related to the rate of stopping muons \(R_\mathrm {\mu ^+}\), the measurement time T, the solid angle \(\varOmega \) subtended by the photon and positron detectors, the efficiencies of these detectors (\(\epsilon _\mathrm {\gamma }, \epsilon _{\mathrm {e}^{+}}\)) and the efficiency of the selection criteria \(\epsilon _\mathrm {s}\)1:

\( N_\mathrm {sig} = R_\mathrm {\mu ^+} \times T \times \varOmega \times \mathcal{B} \times \epsilon _\mathrm {\gamma } \times \epsilon _{\mathrm {e}^{+}} \times \epsilon _\mathrm {s}. \)   (1)

The single event sensitivity (SES) is defined as the \( \mathcal{B} \) for which the experiment would see one event. In principle the lowest SES, and therefore the largest possible \(R_\mathrm {\mu ^+}\), is desirable in order to be sensitive to the lowest possible \( \mathcal{B} \). The number of accidental coincidences \(N_\mathrm {acc}\), for given selection criteria, depends on the experimental resolutions (indicated as \(\varDelta \)) with which the four relevant quantities (\(E_{\mathrm {\gamma }}\), \(p_{\mathrm {e}^{+}}\), \(\varTheta _{\mathrm {e}^+ \gamma }\), \(t_{\mathrm {e}^+ \gamma }\)) are measured. By integrating the RMD photon and Michel positron spectra over respectively the photon energy and positron momentum resolution intervals, it can be shown that:

\( N_\mathrm {acc} \propto R_\mathrm {\mu ^+}^{2} \times \varDelta E_{\mathrm {\gamma }}^{2} \times \varDelta p_{\mathrm {e}^{+}} \times \varDelta \varTheta _{\mathrm {e}^+ \gamma }^{2} \times \varDelta t_{\mathrm {e}^+ \gamma } \times T. \)   (2)

The number of RMD background events \(N_\mathrm {RMD}\) can be calculated by integrating the SM calculation of the RMD differential branching ratio [4] over the appropriate kinematic intervals, but there is no simple equation for \(N_\mathrm {RMD}\). In MEG, \(N_\mathrm {RMD}\) was more than ten times smaller than \(N_\mathrm {acc}\) [2]. Due to the dependence \(N_\mathrm {acc} \propto R_\mathrm {\mu ^+}^2\), in comparison with \(N_\mathrm {RMD} \propto R_\mathrm {\mu ^+}\), the accidental coincidences in MEG II, where \(R_\mathrm {\mu ^+}\) is about twice as large as in MEG, will dominate even more over the number of background events from RMD.
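The scaling behaviour described above can be sketched numerically. In the snippet below, the only grounded inputs are the MEG and MEG II stopping rates and the linear/quadratic rate dependences quoted in the text; the absolute event counts are illustrative placeholders, not MEG measurements.

```python
# Sketch of how signal and background counts scale with the muon stopping
# rate, following the proportionalities quoted in the text:
#   N_sig ∝ R_mu,  N_RMD ∝ R_mu,  N_acc ∝ R_mu^2  (at fixed resolutions).
# The baseline counts n_sig/n_rmd/n_acc are illustrative placeholders.

def scaled_counts(r_ratio, n_sig=1.0, n_rmd=1.0, n_acc=10.0):
    """Scale baseline counts to a stopping rate r_ratio times larger."""
    return {
        "signal":     n_sig * r_ratio,       # linear in R_mu
        "rmd":        n_rmd * r_ratio,       # linear in R_mu
        "accidental": n_acc * r_ratio**2,    # quadratic in R_mu
    }

# MEG II stops ~2.3x more muons than MEG (7e7/s vs 3e7/s).
meg2 = scaled_counts(r_ratio=7e7 / 3e7)
# Accidentals grow ~5.4x while RMD grows only ~2.3x, so accidental
# coincidences dominate the background even more strongly in MEG II.
print(meg2)
```

This makes concrete why the stopping rate cannot be raised arbitrarily: the accidental background gains a factor of the rate ratio relative to the signal.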

It is clear from Eqs. (1) and (2) that, for fixed experimental resolutions, the muon stopping rate cannot be increased arbitrarily but must be chosen so as to keep a reasonable signal-to-background ratio.

After five years of MEG data-taking, only a limited gain in sensitivity could be achieved with further statistics, because the accidental background extends into the signal region. Therefore, data-taking ceased in 2013, allowing the upgrade program to proceed with full impetus.

In the \({\mu ^-N \rightarrow \hbox {e}^-N}\) conversion experiments, negative muons are stopped in a thin target and form muonic atoms. The conversion of the muon into an electron in the field of the nucleus results in the emission of a monochromatic electron of momentum \({\sim }100\) MeV/c, depending on the target nucleus used. Here the backgrounds to be rejected are totally different from the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) case. The dominant background contributions are muon decay-in-orbit and those correlated with the presence of beam impurities, such as pions. In order to reduce these backgrounds the experiments planned at Fermilab (Mu2e) [5, 6] and J-PARC (COMET [7, 8] and DeeMe [9]) will use pulsed proton beams to produce their muons. Since muonic atoms have lifetimes ranging from hundreds of nanoseconds up to the free muon lifetime at low Z, the conversion electrons are searched for in the intervals between beam bunches.

The COMET collaboration plans to start the first phase of the experiment in 2018 with a sensitivity reach better than \(10^{-14}\), to be compared with the existing limit \(7\times 10^{-13}\) [10], followed by the second phase aiming for a goal sensitivity of \(7\times 10^{-17}\), while the Mu2e experiment is foreseen to start in 2021 with a first phase sensitivity goal of \(7\times 10^{-17}\). These experiments can in principle reach sensitivities below \(10^{-17}\) [11, 12].

The \({\mu \rightarrow \hbox {3e}}\) decay search is being pursued in a new experiment, proposed at PSI: Mu3e [13]. This plans a staged approach to reach its target sensitivity of \(10^{-16}\), to be compared with the existing limit \(1\times 10^{-12}\) [14]. The initial stage involves sharing part of the MEG beam line and seeks a three orders-of-magnitude increase in sensitivity over the current limit, its goal being \(10^{-15}\). The final stage foresees muon stopping rates of the order of \(10^{9}\,\upmu ^{+}\)/s.

The decays \(\tau \rightarrow \ell \gamma \) and \(\tau \rightarrow 3\ell \) will be explored by the Belle II experiment at SuperKEKB [15, 16] and by a proposed experiment at the super Charm-Tau factory [17, 18], where sensitivities of the order of \(10^{-9}\) in the branching ratios for these channels are expected.

A comparison between the sensitivity planned for MEG II and that envisaged for the other above mentioned cLFV processes will be discussed in the next section after a very short introduction to cLFV predictions in theories beyond the SM.

1.2 Scientific merits of the MEG II experiment

Although the SM has proved to be extremely successful in explaining a wide variety of phenomena in the energy scale from sub-eV to \(O({1}\,{\hbox {TeV}})\), it is widely considered a low energy approximation of a more general theory. One of the attractive candidates for such a theory is the grand-unified theory (GUT) [19], which unifies all the SM gauge groups into a single group as well as quarks and leptons into common multiplets of the group. In particular, the supersymmetric version (SUSY-GUT) has received a great amount of attention after the LEP experiments showed that a proper unification of the forces can be achieved at around a scale \(M_\mathrm {GUT}{\sim } 10^{16}\hbox { GeV}\) if SUSY particles exist at a scale \(\mathcal{O}({1}\,\hbox {TeV})\) [20]. The search for TeV-scale SUSY particles has been one of the goals of the LHC program. Results so far have been negative for masses up to 1–2 TeV [21, 22].

The experimentally measured phenomenon of neutrino oscillations [23, 24, 25] requires an extension of the SM. It demonstrates that lepton flavour is violated, and neutrinos have masses but they are orders of magnitude smaller than those of quarks and charged leptons. An appealing extension of the SM consists in introducing Majorana masses for neutrinos to naturally account for the tiny neutrino masses via the seesaw mechanism [26, 27, 28, 29]. This approach predicts the existence of heavy right-handed Majorana neutrinos2 in the range of \(10^{9}\)–\(10^{15}\) GeV. This ultra-high mass scale may be indicative of their connection to SUSY-GUT (e.g. all the SM fermions plus the right-handed neutrino in a generation can fit into a single multiplet in SO(10) GUT). The Majorana neutrinos violate the lepton number, and may account for the matter–antimatter asymmetry in the Universe [31].

It is generally difficult to detect, even indirectly, the effects of such ultra-high energy scale physics. However, the situation changes with SUSY, and cLFV signals provide a general test of SUSY-GUT and SUSY-seesaw as discussed below.

It is well known that cLFV is sensitive to SUSY [32, 33, 34]; in fact the parameter space for the minimal SUSY extension of the SM (MSSM) has largely been constrained by flavour- and CP-violation processes involving charged leptons and quarks [35, 36, 37, 38]. These experimental observations lead to considering special mechanisms of SUSY breaking, requiring e.g. the universal condition of SUSY particles’ masses at some high scale. It was however shown that mixing in sleptons emerges unavoidably at low energy in SUSY-GUT [39] and SUSY-seesaw [40] models even if the lepton flavour is conserved at the high scale. This is because flavour-violation sources, i.e. at least the quark and/or neutrino Yukawa interactions, do exist in the theory and radiatively contribute to the mass-squared matrices of sleptons during the evolution of the renormalisation-group equation.3 As a result, \( \mathcal{B} ({\mu \rightarrow \hbox {e} \gamma })\) is predicted at an observable level \({10^{-11}}\)–\({10^{-14}}\) [42, 43, 44, 45, 46, 47, 48, 49]. This theoretical framework motivated the MEG and MEG II experiments.

In order to appreciate this, we recall that the SM, even introducing massive neutrinos, practically forbids any observable rate of cLFV (\( \mathcal{B} ({\mu \rightarrow \hbox {e} \gamma }) < 10^{-50}\)) [50, 51]. Processes with cLFV are therefore clean channels to look for possible new physics beyond the SM, for which a positive signal would be unambiguous evidence.

Over the last 5 years, two epoch-making developments took place in particle physics: the discovery of the Higgs boson [52, 53] and the measurement of the last unknown neutrino mixing angle \(\theta _{13}\) [54, 55, 56, 57]. The Higgs boson mass of 125 GeV [58], rather light, on the one hand supports the SUSY-GUT scenario, since it lies in the predicted region [59]. On the other hand, it is relatively heavy for the MSSM and suggests, together with the null results in the direct searches at the LHC, that the SUSY particles would be heavier than expected. This implies that a smaller \( \mathcal{B} ({\mu \rightarrow \hbox {e} \gamma })\) is expected because of the approximate dependence \(\propto 1/M_\mathrm {SUSY}^4\). This might explain why MEG was not able to detect the signal, as well as why other flavour observables, particularly \(\mathrm {b} \rightarrow \mathrm {s}\gamma \) [60] and \(B_\mathrm {s} \rightarrow \mu ^+ \mu ^-\) [61], have so far been measured to be consistent with the SM. In contrast, the observed large mixing angle \(\theta _{13} = {{8.3\pm 0.2}}^{\circ }\) [25] suggests a higher \( \mathcal{B} ({\mu \rightarrow \hbox {e} \gamma })\) in many physics scenarios such as SUSY-seesaw.

Updated studies of SUSY-GUT/seesaw models taking those recent experimental results into account show that \( \mathcal{B} ({\mu \rightarrow \hbox {e} \gamma })\sim 10^{-13}\)–\(10^{-14}\) is possible up to SUSY particles’ masses around 5–10 TeV [62, 63, 64, 65, 66, 67, 68, 69, 70], well above the region where LHC (including HL-LHC) direct searches can reach. In addition, cLFV searches are sensitive to components which do not strongly interact (e.g. sleptons and electroweakinos in MSSM) and thus are not much constrained by the LHC results. In light of the above considerations, further exploration of the range \( \mathcal{B} ({\mu \rightarrow \hbox {e} \gamma })\sim O(10^{-14})\) in coincidence with the 14-TeV LHC run provides a unique and powerful probe, complementary and synergistic to LHC, to explore new physics.

Comparison between different \(\mu \rightarrow \mathrm {e}\) transition processes can be made model-independently with an effective-field-theory approach. In the presence of new physics, cLFV processes are generated by higher-dimensional operators; the lowest-dimensional one that directly contributes to \({\mu \rightarrow \hbox {e} \gamma }\) is the following dimension-six (dipole-type) operator,

\( \mathcal{O}_\mathrm {dipole} = \frac{\langle H \rangle }{\varLambda ^{2}}\, \bar{\mu }_{R}\, \sigma _{\mu \nu }\, \mathrm {e}_{L}\, F^{\mu \nu },\)   (3)

where \(\varLambda \) is the scale of new physics, \(\langle H \rangle \) is the vacuum expectation value of the Higgs field and \(F^{\mu \nu }\) is the field-strength tensor of the photon. This operator also induces \({\mu \rightarrow \hbox {3e}}\) and \({\mu ^-N \rightarrow \hbox {e}^-N}\) via the propagation of a virtual photon. There are several other dimension-six operators which cause \(\mu \rightarrow \mathrm {e}\) transitions, and their amplitudes for each of the three processes are model-dependent.4

In many models, especially most SUSY models including the above-mentioned SUSY-GUT/seesaw models, the operator (3) dominates the \(\mu \rightarrow \mathrm {e}\) transitions. In such a case, the following relations hold independently of the parameters of the models [4, 87]:

\( \mathcal{B} ({\mu \rightarrow \hbox {3e}}) \simeq 0.006 \times \mathcal{B} ({\mu \rightarrow \hbox {e} \gamma }),\)   (4)

\( \mathrm {CR}({\mu ^-N \rightarrow \hbox {e}^-N}) \simeq 0.0026 \times \mathcal{B} ({\mu \rightarrow \hbox {e} \gamma }) \quad (\hbox {for aluminium}).\)   (5)

Therefore, a search for \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) with a sensitivity of \({\sim } 6\times 10^{-14}\), which is the target of MEG II, on a much shorter timescale and with a far lower budget than other future projects, is competitive not only with the second phase of the Mu3e experiment [13] but also with the COMET [7] and Mu2e [6] experiments. On the other hand, in the case of discovery, we can benefit from the synergy with the results of these experiments, which provides strong model-discriminating power; any observed deviation from the relations (4) and (5) would indicate contributions from operators other than (3).
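The comparison above can be made quantitative with a short sketch. The ratio coefficients below are the commonly quoted dipole-dominance values (assumed here purely for illustration; the exact coefficients come from model calculations such as [4, 87]):

```python
# Translate a mu -> e gamma branching-ratio sensitivity into the
# equivalent sensitivities for mu -> 3e and mu-e conversion, assuming
# the dipole operator dominates the mu -> e transitions. The ratio
# coefficients are commonly quoted dipole-dominance values, used here
# for illustration only.

R_3E   = 0.006    # B(mu -> 3e) / B(mu -> e gamma)
R_CONV = 0.0026   # CR(mu N -> e N; Al) / B(mu -> e gamma)

def equivalent_reach(b_meg):
    """Equivalent reach of a mu -> e gamma sensitivity b_meg."""
    return {"mu->3e": R_3E * b_meg, "mu-e conv (Al)": R_CONV * b_meg}

reach = equivalent_reach(6e-14)  # MEG II target sensitivity
# Under dipole dominance, a 6e-14 mu -> e gamma search probes
# mu -> 3e at ~3.6e-16 and mu-e conversion at ~1.6e-16.
```

This is why the MEG II target remains competitive with the dedicated \({\mu \rightarrow \hbox {3e}}\) and conversion searches whenever the dipole operator dominates.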

We finally note that MEG II will represent the best effort to pursue the search for the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) rare decay with the available detector technology, coupled with the most intense continuous muon beam in the world. Experience shows that achieving any significant improvement in this field requires several years (more than one decade was necessary to pass from MEGA to MEG) and therefore we feel committed to push the sensitivity of the search to the ultimate limits.

1.3 Overview of the MEG II experiment

The MEG II experiment plans to continue the search for the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decay, aiming for a sensitivity enhancement of one order of magnitude compared to the final MEG result, i.e. down to \(6 \times 10^{-14}\) for \( \mathcal{B} ({\mu ^+ \rightarrow \hbox {e}^+ \gamma })\). Our proposal for upgrading MEG [88] was approved by the PSI research committee in 2013; the details of the technical design were then fixed after intensive R&D and are reported in this paper.

The basic idea of the MEG II experiment is to achieve the highest possible sensitivity by making maximum use of the available muon intensity at PSI with the basic principle of the MEG experiment but with improved detectors. A schematic view of MEG II is shown in Fig. 3.

A beam of surface \(\mathrm {\mu ^+}\) is extracted from the \(\pi \)E5 channel of the PSI high-intensity proton accelerator complex, as in MEG, but the intensity is increased to the maximum. After the MEG beam transport system, the muons are stopped in a target, which is thinner than the MEG one to reduce both multiple Coulomb scattering of the emitted positrons and photon background generated by them. The stopping rate becomes \(R_\mathrm {\mu ^+}= 7\times 10^{7}\hbox { s}^{-1}\), more than twice that of MEG (see Sect. 2).

The positron spectrometer uses the gradient magnetic field to sweep away the low-momentum \({\mathrm {e}^{+}}\). The COBRA magnet is retained from MEG, while the positron detectors inside are replaced by new ones. Positron tracks are measured by a newly designed single-volume cylindrical drift chamber (CDCH) able to sustain the required high rate. The resolution for the \({\mathrm {e}^{+}}\) momentum vector is improved with more hits per track by the high density of drift cells (see Sect. 4). The positron time is measured with improved accuracy by a new pixelated timing counter (pTC) based on scintillator tiles read out by SiPMs (see Sect. 5). The new design of the spectrometer increases the signal acceptance by more than a factor 2 due to the reduction of inactive materials between CDCH and pTC.

The photon energy, interaction point position and time are measured by an upgraded LXe photon detector. The energy and position resolutions are improved with a more uniform collection of scintillation light achieved by replacing the PMTs on the photon entrance face with new vacuum-ultraviolet (VUV) sensitive 12 \(\times \) 12 \(\hbox {mm}^{2}\) SiPMs (see Sect. 6).

A novel device for active background suppression is introduced: the Radiative Decay Counter (RDC), which employs plastic scintillators for timing and scintillating crystals for energy measurement in order to identify low-momentum \({\mathrm {e}^{+}}\) associated with high-energy RMD photons (see Sect. 7).

The trigger and data-acquisition system (TDAQ) is also upgraded to meet the stringent requirements of an increased number of read-out channels and to cope with the required bandwidth by integrating the various functions of analogue signal processing, biasing for SiPMs, high-speed waveform digitisation, and trigger capability into one condensed unit (see Sect. 8).

In rare decay searches the capability of improving the experimental sensitivity depends on the use of intense beams and high performance detectors, accurately calibrated and monitored. This is the only way to ensure that the beam characteristics and the detector performances are reached and maintained over the experiment lifetime. To that purpose several complementary approaches have been developed with some of the methods requiring dedicated beams and/or auxiliary detectors. Many of them have been introduced and commissioned in MEG and will be inherited by MEG II with some modifications to match the upgrade. In addition new methods are introduced to meet the increased complexity of the new experiment.

– a large momentum-byte \(\varDelta p_\mathrm {\mu ^+}/p_\mathrm {\mu ^+}\sim 7\%\) (FWHM) with an achromatic final focus, yielding an almost monochromatic beam with a high stop density in a thin target,

– minimal and well separated beam-correlated backgrounds, such as positrons from Michel decay or \(\pi ^0\)-decay in the production target or decay particles produced along the beam line, and

– a minimal material budget along the beam line to suppress multiple Coulomb scattering and photon production, using vacuum or helium environments as far as possible.

Coupling the MEG COBRA spectrometer and LXe photon detector to the \(\pi \)E5 channel, which ends with the last dipole magnet ASC41 in the shielding wall, is achieved with a Wien filter (cross-field separator) and two sets of quadrupole triplet magnets, as shown in Fig. 4. These front-elements of the MEG beam line provide maximal-transmission optics through the separator, followed by an achromatic focus at the intermediate collimator system. Here an optimal separation can be achieved between the surface muons and the eight-fold higher beam positron contamination, consisting of Michel positrons and positrons from \(\pi ^0\)-decay in the target that have the correct momentum (see Fig. 5) [1]. The muon range-momentum adjustment is made at the centre of the superconducting beam transport solenoid BTS, where a Mylar® degrader system is placed at the central focus to minimise multiple Coulomb scattering. The degrader thickness of \(300\,\upmu \text {m}\) takes into account the remaining material budget of the vacuum window at the entrance to the COBRA magnet and the helium atmosphere inside, adjusting the residual range of the muons so that they stop at the centre of a \(205\,\upmu \text {m}\) thick polyethylene target placed at 20.5\(^{\circ }\) to the axis.

Measurement of the separation quality with the Wien-filter during the 2015 Pre-Engineering Run

The residual polarisation of the initially 100% polarised muons at production has been estimated by considering depolarising effects at production, during propagation and on moderation in the stopping target. The net polarisation is seen in the asymmetry of the angular distribution of decay Michel positrons from the target. The estimate is consistent with measurements made using Michel positrons at the centre of the COBRA spectrometer [89], where the energy-dependent angular distributions were analysed. A high residual polarisation of \(P_{\mu ^+} = -0.86\pm 0.02~\mathrm {(stat.)}\,^{+0.06}_{-0.05}~\mathrm {(syst.)}\) was found, with the single largest depolarising contribution coming from the cloud muon content of the beam. These are muons derived from pion decay-in-flight in and around the production target, which inherently have low polarisation owing to the widely differing acceptance kinematics. The cloud muon content in the \(28\) MeV/c surface muon beam was derived from measurements in which the muon momentum spectrum was fitted with a constant cloud muon content over the limited region around the kinematic edge of the spectrum at \(29.79\) MeV/c. This was cross-checked against measurements at \(28\) MeV/c using a negative muon beam: in this case there are no surface muons (due to the formation of pionic atoms on stopping) and hence a clear cloud muon signal can be measured. When comparing the cross-sections and the kinematics of pions of both charge signs, consistency is found, with a ratio of \(\sim 1.2\)% of negative cloud muons to surface muons at \(28\) MeV/c. This situation is not expected to change significantly for MEG II, apart from the slightly higher divergences expected from the increased \(\varDelta p_\mathrm {\mu ^+}/p_\mathrm {\mu ^+}\) and a possible difference in the polarisation quenching properties of the target material in a magnetic field [90], which is still under investigation.

2.2 Upgrade concept

The increased sensitivity sought in MEG II will be realised partly by full exploitation of the available beam intensity and partly by improved detector performance, which keeps the dominant background contribution, from overlapping accidental events, under control at an experimental sensitivity an order of magnitude better than MEG. As outlined in Sect. 1.1 the accidental background depends quadratically on the muon beam stopping rate, whereas the signal is directly proportional to it. This puts stringent limits on the material budget and on the suppression of beam-correlated backgrounds in the beam line, while still allowing for the flexibility and versatility of the different beam modes required for calibration purposes. The three main modes required are:

For MEG II, the beam line components and optics will stay the same as for MEG, apart from the introduction of extra beam monitoring tools (cf. Sect. 2.3.2). However, the increased muon rate for MEG II, while maintaining the high transmission optics, can only be achieved by an increase in the momentum-byte \(\varDelta p_\mathrm {\mu ^+}/p_\mathrm {\mu ^+}\), i.e. by opening the \(\pi \)E5 channel momentum slits to their full extent. An increased \(\varDelta p_\mathrm {\mu ^+}\), however, implies an increased range straggling of the beam. A study undertaken for the MEG II upgrade proposal [88] looked at various beam/target scenarios, comparing the use of a surface muon beam of \(28\) MeV/c (mean range \(\sim 125\) mg cm\(^{-2}\)) with that of a sub-surface beam of \(25\) MeV/c (mean range \(\sim \) 85 mg cm\(^{-2}\)). As the name implies, sub-surface muons also originate from stopped pion decay at a unique momentum of \(29.79\) MeV/c, but are selected from deeper within the production target and so lose some of their energy on exiting.

The potential advantage of such a sub-surface beam is the reduced range straggling, which comprises two components (cf. Eq. (6)): the first from energy-loss straggling in the intervening material, which at these momenta amounts to about 9% (FWHM) of the range [91], and the second from the momentum-byte \(\varDelta p_\mathrm {\mu ^+}/p_\mathrm {\mu ^+}\). However, the range and the straggling vary most strongly with momentum, being proportional to \(a\times p^{3.5}\), where ‘a’ is a material constant:

\( \varDelta R_\mathrm {tot} = \sqrt{\varDelta R_\mathrm {straggling}^{2} + \left( 3.5\, a\, p^{3.5}\, \frac{\varDelta p_\mathrm {\mu ^+}}{p_\mathrm {\mu ^+}}\right) ^{2}}.\)   (6)

The measured \(\pi \)E5 momentum spectrum with full momentum-byte. The red curve is a fit to the data with a \(p^{3.5}\) power law, folded with a Gaussian momentum resolution corresponding to the momentum-byte, plus a constant cloud muon contribution

A momentum change has a direct impact on the target thickness, which is a balance between maximising the stop density and minimising both the multiple Coulomb scattering of the out-going Michel positrons and the photon background produced in the target. Furthermore, the surface muon rate also decreases with \(p^{3.5}\) and therefore ultimately limits how far down in momentum one can go. This behaviour is shown in Fig. 6, where the measured muon momentum spectrum is fitted with a \(p^{3.5}\) power-law, folded with a Gaussian momentum resolution equivalent to the momentum-byte, plus a constant cloud muon content. The blue and the red (truncated) boxes show the \(\pm 3\sigma _{p_\mathrm {\mu ^+}}\) momentum acceptance for the surface/sub-surface beams, corresponding respectively to (\({\pm 2.7}/{\pm 2.5}\)) MeV/c. The optimal momentum yielding the highest intensity within the full momentum-byte is centred around \(28.5\) MeV/c. For each data-point the whole beam line must be optimised. The upgrade study [88] investigated various combinations of beam momentum and target parameters, with the thickness varied between 100 and 250 \({\upmu \hbox {m}}\) and the orientation angle between \({15.0}^{\circ }\) and \({20.5}^{\circ }\). This resulted in only one really viable solution that could yield the required muon stopping intensity of \(7\times 10^{7}\,\upmu ^{+}\)/s, suitable for achieving the goal sensitivity within a measuring period of \({\sim 3}\) years: a surface muon beam of \(28\) MeV/c with a polyethylene target of \(140\,\upmu \text {m}\) thickness, placed at an angle of 15.0\(^{\circ }\) to the axis.
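The fit model just described can be sketched numerically. The snippet below implements a \(p^{3.5}\) power law with a sharp kinematic edge at 29.79 MeV/c, folded with a Gaussian momentum resolution, plus a constant cloud-muon term; the normalisation, resolution and cloud level are illustrative placeholders, not fit results.

```python
import numpy as np

# Sketch of the momentum-spectrum fit model: a p^3.5 power law with a
# sharp kinematic edge, numerically folded with a Gaussian momentum
# resolution, plus a constant cloud-muon term. All parameter values are
# illustrative placeholders.

P_EDGE = 29.79  # MeV/c, kinematic edge of the surface-muon spectrum

def beam_model(p, norm=1.0, sigma=0.5, cloud=0.02):
    """Muon rate vs measured momentum p (MeV/c), arbitrary units."""
    pp = np.linspace(0.0, P_EDGE, 4000)       # true momenta below edge
    dp = pp[1] - pp[0]
    truth = norm * pp**3.5                     # surface-muon power law
    gauss = np.exp(-0.5 * ((p[:, None] - pp[None, :]) / sigma) ** 2)
    gauss /= sigma * np.sqrt(2.0 * np.pi)      # resolution kernel
    folded = np.sum(truth[None, :] * gauss, axis=1) * dp
    return folded + cloud * norm * P_EDGE**3.5  # constant cloud term

p = np.array([25.0, 28.0, 29.0, 31.0])
rates = beam_model(p)
# The rate rises with momentum up to the smeared edge, then falls off.
```

The smeared edge is what the red and blue acceptance boxes in Fig. 6 are drawn against.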

A sub-surface beam could meet the rate requirement only by scaling up the target thickness to \(160\,\upmu \text {m}\), which negated its main advantage. Hence the baseline solution chosen for MEG II was the surface muon beam, owing to the thinner target and higher achievable rate as well as their beneficial impact on the resolutions and background.
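A quick back-of-the-envelope check of the quoted range numbers follows from the \(R = a\,p^{3.5}\) scaling alone. Calibrating the material constant on the surface-beam point reproduces the sub-surface range quoted above:

```python
# Back-of-the-envelope check of the range numbers quoted in the text,
# using the empirical R = a * p**3.5 scaling. Calibrating 'a' so that a
# 28 MeV/c surface beam has a mean range of ~125 mg/cm^2 reproduces the
# quoted ~85 mg/cm^2 for the 25 MeV/c sub-surface beam.

def muon_range(p_mev_c, a):
    """Mean range (mg/cm^2) from the R = a * p^3.5 scaling."""
    return a * p_mev_c**3.5

a = 125.0 / 28.0**3.5          # calibrate on the surface-beam point
r_sub = muon_range(25.0, a)    # ~84 mg/cm^2, consistent with the text

# The relative range spread from the momentum-byte alone follows
# dR/R = 3.5 * dp/p, so a 7% FWHM momentum-byte gives ~24.5% FWHM.
straggling_fwhm = 3.5 * 0.07
```

This also makes the trade-off explicit: lowering the momentum shrinks the range (and hence the usable target thickness) much faster than linearly.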

2.3 Beam monitoring

Two new detectors have been developed to measure the beam profile and rate: the sampling scintillating fibre beam monitoring (sampling SciFi) mounted at the entrance to the spectrometer and the luminophore foil detector (CsI on a Mylar support) coupled with a CCD camera installed at the intermediate focus collimator system.

2.3.1 The sampling SciFi beam monitoring detector

This detector is a quasi non-invasive beam monitoring tool that sustains high rates and provides beam rate, profile measurements and particle identification in real time. It is based on scintillating fibres (SciFi) coupled to SiPMs; the use of SiPMs allows the detector to operate in high magnetic fields.

It consists of a grid of two orthogonal fibre layers: one with the fibres running along the x-axis and the other with the fibres along the y-axis. The detector is expected to be located at the end of the vacuum beam line, just in front of the spectrometer. A movable configuration allows the remote removal/insertion of the detector into the beam.

Figure 7 shows the full-scale prototype that was built and tested. We used Saint-Gobain BCF-12, 250 \(\times \) 250 \(\upmu \hbox {m}^{2}\) double-cladding fibres [92], each one independently coupled at both ends to S13360-1350CS SiPMs from Hamamatsu Photonics [93] (with an active area of 1.3 \(\times \) 1.3 \(\hbox {mm}^{2}\) and a pixel size of 50 \(\times \) 50 \(\upmu \hbox {m}^{2}\)). The relative distance between adjacent fibres mounted in the same layer is 4.75 mm, a pitch which satisfies the requirements for a precise measurement of the beam profile and rate. Furthermore, a large detector transparency \(T > 92\%\) (where \(1-T =\) particles hitting the fibres/total incident particles) is achieved with a relatively small number of channels (\(\approx 100\)). In fact for this prototype we mounted 21 fibres per layer, giving a total of 84 channels. The signals are sent to the TDAQ prototype (see Sect. 8), which also includes the preamplifiers (with adjustable gain up to 100, fully used here) and the power supplies for the SiPMs (operated at \(\approx \) 55.6 V). The trigger used for the beam profile and rate measurements is the “OR” of all the “AND”s of the SiPMs coupled to the same fibre, with a common threshold of \({\ge 0.5}\) photoelectrons for all channels.
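The trigger logic just described, the "OR" of per-fibre SiPM coincidences, can be sketched as follows; the event data layout is a hypothetical simplification for illustration:

```python
# Sketch of the SciFi trigger logic described above: each fibre fires
# when BOTH of its SiPMs ("AND") exceed a common threshold of 0.5
# photoelectrons, and the detector triggers on the "OR" of all fibre
# coincidences. The event layout is a hypothetical simplification.

THRESHOLD_PE = 0.5  # common threshold, in photoelectrons

def fibre_fired(left_pe, right_pe):
    """Coincidence ("AND") of the two SiPMs reading out one fibre."""
    return left_pe >= THRESHOLD_PE and right_pe >= THRESHOLD_PE

def detector_trigger(fibre_signals):
    """"OR" over all fibre coincidences: [(left_pe, right_pe), ...]."""
    return any(fibre_fired(l, r) for l, r in fibre_signals)

# A particle crossing the third fibre fires both its SiPMs; dark-count
# noise on a single SiPM of the first fibre fails the coincidence.
event = [(0.8, 0.1), (0.0, 0.0), (2.3, 1.9)]
assert detector_trigger(event)
```

The two-ended coincidence is what suppresses single-SiPM dark counts, allowing the low 0.5-photoelectron threshold.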

Positive muon beam profile and rate as measured along the \(\pi \)E5 beam line

Figure 8 shows the beam profile as measured with the detector mounted along the \(\pi \)E5 beam line. The incident particles are positive muons with an initial momentum of \(28\) MeV/c, after having left the \(190\,\upmu \text {m}\) Mylar window at the end of the vacuum beam line and travelled some 15 cm in air before traversing the \(25\,\upmu \text {m}\) of Tedlar\(^{\textregistered }\) used as a light-tight shield. The corresponding total rate and beam profiles were \(R_\mathrm {\mu ^+}(\mathrm {at}~I_{\mathrm {p}}= {2.2}\,{\hbox {mA}}) = (1.11\pm 0.011)\times 10^{8}\,{\mu ^{+}/{s}}\) and \((\sigma _x,\sigma _y)= (18.1 \pm 0.1, 17.8 \pm 0.1)\hbox { mm}\), respectively. These measured numbers are consistent to within 5% or better with those provided by our “standard” beam monitoring tools (methods based on a 2D x–y scanner using a large depletion layer APD or a pill scintillator coupled to a miniature PMT). One of the most attractive features of this detector is its capability of providing the full beam characterisation in just tens of seconds, with all the associated benefits such as faster beam tuning, real-time feedback on beam/apparatus malfunctions, reduced systematic uncertainties, etc.
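The beam widths quoted above come from Gaussian fits to the measured fibre-by-fibre profiles. A minimal sketch of such a projection fit using `scipy.optimize.curve_fit` (the hit counts are synthetic and all beam parameters are invented for illustration; only the 21-fibre, 4.75 mm pitch geometry is taken from the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    """1D Gaussian used to fit a beam-profile projection."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rng = np.random.default_rng(0)

# Hypothetical hit counts on 21 fibres at 4.75 mm pitch, centred on 0,
# generated from a sigma = 18 mm beam (numbers are illustrative).
fibre_x = (np.arange(21) - 10) * 4.75          # fibre positions [mm]
hits = rng.poisson(1e4 * gauss(fibre_x, 1.0, 0.0, 18.0))

popt, pcov = curve_fit(gauss, fibre_x, hits, p0=[hits.max(), 0.0, 10.0])
amp, mu, sigma = popt
print(f"beam centre = {mu:.2f} mm, sigma = {abs(sigma):.2f} mm")
```

The same fit applied to the two orthogonal fibre layers yields the \((\sigma_x,\sigma_y)\) pair quoted above.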

Figure 9 shows the detected charge associated with positrons of \(28\) MeV/c and with muons stopping in the fibres. A clear separation between the positrons (which are minimum ionising particles, m.i.p.) and the low energy muons can be seen.

Figure 10, finally, shows the capability of the detector to distinguish between different particle species at high momentum (\(p=115\,\text {MeV}/\mathrm{c}\)) by plotting the measured charge versus their time-of-flight (the radio frequency of the main accelerator is used as a time reference). From left to right we have positrons, pions and muons.

Scatter plot of the measured charge versus the time difference between the arrival time of the particles (with momentum \(p=115\,\text {MeV}/\mathrm{c}\)) and the radio frequency of the main accelerator. From left to right we have positrons, pions and muons
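The time-of-flight separation in Fig. 10 follows directly from relativistic kinematics: at a fixed momentum of 115 MeV/c the three species have very different velocities. A back-of-envelope check (the 3 m flight path below is an assumed value for illustration only; the measured times are further wrapped by the accelerator RF period):

```python
import math

M = {"e+": 0.511, "mu+": 105.66, "pi+": 139.57}  # masses [MeV/c^2]
P = 115.0             # beam momentum [MeV/c]
L_PATH = 3.0          # assumed flight path [m] (illustrative)
C = 299_792_458.0     # speed of light [m/s]

def tof_ns(mass, p=P, length=L_PATH):
    """Time of flight in ns; beta = p/E with E = sqrt(p^2 + m^2)."""
    beta = p / math.hypot(p, mass)
    return length / (beta * C) * 1e9

for name, m in M.items():
    print(f"{name}: beta = {P / math.hypot(P, m):.3f}, TOF = {tof_ns(m):.2f} ns")
```

At this momentum the positrons are fully relativistic while the pion and muon velocities differ by some 15%, which is what makes the three clusters separable against the RF time reference.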

2.3.2 An ultra-thin CsI(Tl) luminophore foil beam monitor

A new in-situ, high rate and non-destructive beam monitoring system based on a thin CsI(Tl) scintillation foil (luminophore) and a CCD camera system has been developed for MEG II. Initial tests as an external device, able both to measure the beam intensity and to give a quantitative measure of the beam spot size, led to a permanent installation incorporated into the beam line vacuum at the MEG intermediate-focus collimator system.

The advantages of such a system over the standard MEG pill-scintillator 2D x–y scanner system are four-fold: in-situ, non-destructive measurement of the beam characteristics, with no dismantling of beam line components necessary, as in the case of the pill-scintillator scanner system; in-vacuum measurement, with no corrections needed for multiple Coulomb scattering in the vacuum window or air; comparatively fast measurement, with multiple exposures each of 10–100 s compared with a pill-scintillator 2D “cross-scan” of 10 min or a 2D “raster-scan” of 90 min; and continuous monitoring, allowing online centring in the event of beam steering due to changes of the proton beam position on the muon production target E.

2.3.2.1 CsI(Tl) foils and CCD camera system CsI(Tl) is a well known and common inorganic scintillator with a relatively high light yield, more than \({5\times 10^{4}}\) ph/MeV of deposited energy. The peak emission of CsI(Tl) is at approximately 560 nm, well suited for use in visible-light imaging systems such as a CCD. The scintillation light decay constants (\(\sim 1\,\upmu \)s) are rather long compared to fast organic scintillators, though not problematic for this application due to the much longer exposure times.

Four foils were constructed using a Lavsan (Mylar\(^{\textregistered }\) equivalent) base structure, on which a thin layer of CsI(Tl) was applied using chemical vapour deposition. The CsI(Tl) layer thickness was varied between \(3.0\,\upmu \text {m}\) and \(5.2\,\upmu \text {m}\), allowing the comparison and possible optimisation of the layer thickness.

The imaging system used was a Hamamatsu ORCA FLASH4.0 camera providing 4.19 megapixels with 16 bit pixel depth [94]. An internal Peltier cooling device as well as an external water cooling system allow the sensor temperature to be reduced to \(-30^{\circ }\)C, significantly reducing the thermal noise. The sensor’s peak quantum efficiency matches well the CsI(Tl) peak emission near 560 nm.

2.3.2.2 Beam image analysis Beam profile imaging consists of multi-frame exposures (typically 10), each of 10 s length, together with an equivalent set of background exposures taken with the beam-blocker closed, enabling stray ambient light and the inherent thermal noise of the sensor to be eliminated on subtraction.

All signal and background images are first summed and averaged and then subtracted to generate a calibrated signal image, from which a central region of interest is selected. This image is then fitted with a 2D correlated Gaussian function to obtain the beam position and widths in x and y as well as their correlation. The summed image intensity is normalised by the total proton current during the exposure period. The current measurement is initiated by a simultaneous external trigger of the proton signal scaler and the camera shutter. A typical image after processing is shown in Fig. 11.
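The averaging, subtraction and correlated-Gaussian fit described above can be sketched as follows (synthetic frames stand in for the camera images; array sizes, noise levels and beam parameters are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, mux, muy, sx, sy, rho):
    """2D correlated Gaussian, flattened for curve_fit."""
    x, y = xy
    u, v = (x - mux) / sx, (y - muy) / sy
    return amp * np.exp(-(u**2 - 2*rho*u*v + v**2) / (2*(1 - rho**2)))

rng = np.random.default_rng(1)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]

# Ten synthetic signal frames (beam + pedestal + noise) and ten backgrounds.
truth = gauss2d((x, y), 200.0, 30.0, 34.0, 8.0, 6.0, 0.2)
signal = [truth + rng.normal(10, 2, (ny, nx)) for _ in range(10)]
background = [rng.normal(10, 2, (ny, nx)) for _ in range(10)]

# Average each set, then subtract to remove ambient light and thermal noise.
img = np.mean(signal, axis=0) - np.mean(background, axis=0)

p0 = [img.max(), nx / 2, ny / 2, 5.0, 5.0, 0.0]
popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), img.ravel(), p0=p0)
amp, mux, muy, sx, sy, rho = popt
print(f"centre=({mux:.1f},{muy:.1f}) px, widths=({sx:.1f},{sy:.1f}) px, rho={rho:.2f}")
```

In the real analysis the fitted intensity would additionally be normalised to the proton current recorded during the exposure.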

Beam profile signal image after background subtraction, cut to a region of interest, and normalised to the proton current

2.3.2.3 Beam width A comparison of the beam spots as measured by the pill-scintillator with those obtained from x–y projections of the luminophore foil image is shown in Fig. 12, with good agreement within the fit widths. The difference in centroids is due to the difference in alignment between the two setups.

The beam profiles in x and y measured with the pill-scintillator in a and b and projections from the luminophore foil in c and d fitted with Gaussian functions. Emphasis is on the beam widths, as differences in mean positions are attributed to alignment differences in the two setups

The spatial resolution of the luminophore foil system was determined by placing an Al grid just upstream of the foil while irradiating with the muon beam. The grid edges of the resultant image, when fitted with a step function convolved with a Gaussian resolution function, yield an upper limit on the combined foil, camera and beam resolution of \(650\,\upmu \text {m}\), which includes beam divergence and range-straggling effects, so that the intrinsic spatial resolution of the foil is much smaller.
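The edge fit just described has a closed form: a step convolved with a Gaussian is an error-function profile. A minimal sketch of extracting the resolution from one grid edge (the data here are synthetic; positions, amplitudes and the generated resolution are invented):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge(x, a, b, x0, sigma):
    """Step of height a on pedestal b at x0, blurred by Gaussian resolution sigma."""
    return b + 0.5 * a * (1 + erf((x - x0) / (np.sqrt(2) * sigma)))

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 121)                     # position across the edge [mm]
data = edge(x, 100.0, 5.0, 0.4, 0.65) + rng.normal(0, 1, x.size)

popt, _ = curve_fit(edge, x, data, p0=[90.0, 0.0, 0.0, 0.5])
a, b, x0, sigma = popt
print(f"edge position = {x0:.3f} mm, resolution sigma = {abs(sigma)*1e3:.0f} um")
```

Fitting each visible grid edge this way and taking the width of the blurring Gaussian gives the quoted upper limit on the combined resolution.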

2.3.2.4 Beam intensity A beam intensity comparison between the luminophore system and the pill-scintillator system was made by symmetrically opening the \(\pi \)E5 FS41L/R slit system in small steps, so scanning the full beam intensity over an order of magnitude. The comparative plot of relative intensity normalised to the proton beam intensity is shown in Fig. 13. Agreement at the 5\(\%\) level can be seen; the residual difference can be understood as being due to the difference in technique. The pill-scintillator measurement samples only a 2 mm diameter portion of the beam on the beam-axis, whereas the luminophore samples the entire beam spot, whose size changes at the 10\(\%\) level with slit opening over the entire range.

The muon rate as a function of the beam line slit opening, measured using the pill-scintillator and luminophore foil

2.3.2.5 Beam line setup The initially developed external system has since been incorporated into the beam line vacuum as part of the intermediate focus collimator system shown in Fig. 14. The foil frame is attached to a drive shaft and pulley system that allows the foil to be rotated in and out of the beam while under vacuum. A calibration grid is attached to the surface of the frame to allow for a pixel-to-millimetre conversion. The foil and frame are viewed inside the beam pipe, under vacuum and imaged with the CCD camera via a mirror system and glass window on a side port. The interior of the vacuum pipe can be illuminated with a UV LED to conduct calibration measurements of the foil and CCD system within the light-tight region.

The luminophore foil set-up at the collimator. The imaging is done via a mirror system through a side-port to a CCD camera outside the vacuum pipe

An example of the usefulness of such a system can be seen in Fig. 15, which shows the separation quality between muon and positron beam spots imaged at the collimator system with the luminophore foil. The separation quality has purposely been reduced by adjusting the parameters of the Wien filter in order that both spots can be seen simultaneously on the picture. The use of the luminophore allows a calibration of the spatial separation to be made effectively online.

A pseudo 3D light intensity plot showing the muon beam (small peak) and positron beam (large peak) spots together in one image. This is achieved by reducing the Wien filter (SEP41) separation power through reduced E and B fields

2.3.2.6 Conclusions Thin CsI(Tl) luminophore foils offer fast, in-situ beam monitoring possibilities, with negligible impact on beam rate and emittance. The foils, combined with a cooled camera system of sufficient resolution, reproduce the beam profile and rate measurements conducted with the scanning pill-scintillator. A full 2D beam measurement can be made approximately ten times faster while providing long-term non-destructive beam information. Furthermore, the system allows a direct measure of beam parameters without the need for multiple Coulomb scattering corrections due to air or vacuum windows, and allows direct feedback on external influences on the beam position or intensity.

3 Target

The basic requirements for a MEG stopping target are six-fold:

a high muon stopping density over a limited axial region centred on the COBRA fiducial volume,

minimisation of multiple Coulomb scattering for the outgoing positrons,

minimisation of photon conversions from RMD in the target,

minimisation of positron AIF or bremsstrahlung with photons entering the detector acceptance,

the possibility to reconstruct the positron decay vertex and the initial direction at the vertex on the target plane, and

mechanical stability, with good planarity, and remote movability for compatibility with calibrations requiring other targets.

Owing to the thinner target and smaller target angle for MEG II, and the increased \(\varDelta p_\mathrm {\mu ^+}/p_\mathrm {\mu ^+}\), the remaining variable material budget, consisting of the degrader and the COBRA helium environment, must be matched to give an optimal residual range at the target. Figure 16 shows the simulation results for the optimal stopping efficiency versus degrader thickness for the previous MEG \(205\,\upmu \text {m}\) thick polyethylene target. Two different He concentrations are shown, from which it can be seen that 1% of air is equivalent to \(\sim 10\,\upmu \text {m}\) of Mylar.

For MEG II a separate target study was also undertaken to examine the material possibilities for a target equivalent to the baseline \(140\,\upmu \text {m}\) polyethylene (CH\(_2\)) target, placed at 15.0\(^{\circ }\) to the axis. The resulting set of candidate targets is listed in Table 1 below. Since the material thickness of each target is equivalent in terms of surface density (g \(\hbox {cm}^{-2}\)), the residual range, and hence the degrader thickness, is the same for all of them.

The candidate target parameters for an equivalent thickness to the baseline solution of \(140\,\upmu \text {m}\) polyethylene (CH\(_2\)) target, placed at 15.0\(^{\circ }\) to the axis

| Material | Degrader (\(\upmu \hbox {m}\)) | Thickness (\(\upmu \hbox {m}\)) | Thickness (\(\hbox {X}_0\)) | Inclination (deg) | Density (g \(\hbox {cm}^{-3}\)) | Stop efficiency (%) | Mult. scatt. \(\mathrm {\mu ^+}\)(18 MeV) (mrad) | Mult. scatt. \({\mathrm {e}^{+}}\)(52 MeV) (mrad) |
|---|---|---|---|---|---|---|---|---|
| CH\(_2\) | 350 | 140 | \({2.8\times 10^{-4}}\) | 15.0 | 0.893 | 83 | 52.0 | 3.0 |
| Be | 350 | 90 | \(2.6\times 10^{-4}\) | 15.0 | 1.848 | 83 | 49.3 | 2.9 |
| Mylar | 350 | 100 | \(3.5\times 10^{-4}\) | 15.0 | 1.390 | 84 | 58.5 | 3.4 |
| Scint. PVT | 350 | 130 | \(3.1\times 10^{-4}\) | 15.0 | 1.032 | 84 | 54.5 | 3.2 |
| Diamond | 350 | 40 | \(3.3\times 10^{-4}\) | 15.0 | 3.515 | 81 | 56.8 | 3.3 |

The main properties affecting tracking and background production, as well as the target stopping efficiency, show that there are no dramatic differences between the candidates: the multiple Coulomb scattering estimates vary by less than 10% from the average, while the equivalent thickness in radiation lengths varies by about 15% from the average. A separate background study, estimating the number of background photons with energy \(E_{\mathrm {\gamma }}> 48\,\text {MeV}\) produced in the fiducial volume of COBRA per incident muon and entering the LXe photon detector, gave values between \({(1.14\pm 0.05)\times 10^{-6}}\,{\gamma /\mu ^{+}}\) for the scintillation target and \({(1.22\pm 0.05)\times 10^{-6}}\,{\gamma /\mu ^{+}}\) for the Mylar target. The equivalent simulated optimised stopping efficiency in the case of the MEG II polyethylene target is shown in Fig. 17.

Table 1 shows that different materials outperform each other in different categories. In general, the beryllium target shows an overall good performance, though given the thickness and size required, as well as the safety aspects, it is not favoured. Diamond, which is mechanically stable and known to be more radiation tolerant, has the smallest radiation length, as well as having scintillation properties. However, it is currently not commercially available in the size required for a MEG II target. The scintillation target (BC400B) from Saint-Gobain lies in the mid-range of the performance span, though with the lowest number of accepted background photons per muon of all targets. A very important added advantage over the other, non-scintillating targets is the possibility of non-destructive beam intensity and profile measurements using a CCD camera and optical system. This would allow corrections for proton beam shifts on the main pion production target to be applied to the beam centring on the MEG muon target during data-taking. Two prototype targets have so far been implemented for the 2015/2016 Pre-Engineering Runs: a polyethylene (PE) one and a polyvinyltoluene (PVT) one. The prototype scintillation target (PVT) is shown in Fig. 18.

3.1 Scintillation target prototype

(Left) shows two sides of the prototype PVT target used during the 2016 Pre-Engineering Run. The calibration grid is used for the perspective transformation. The carbon-fibre/Rohacell® foam frame can be seen from the other side. (Right) shows the CCD setup and Mylar mirror at the downstream side (DS) of the COBRA magnet \(\sim \) 2.1 m DS of the target

Figure 18 shows the two sides of the prototype target used in the 2016 Pre-Engineering Run; the downstream CCD viewing side has a calibration grid as part of the frame to ensure a correct perspective transformation of the beam image. The frame is a sandwich of carbon fibre and Rohacell foam, ensuring a lightweight yet strong construction, as can be seen from the lower image in Fig. 18 (left). The fiducial size of the scintillator, excluding the frame, is 260 \(\times \) 70 \(\hbox {mm}^{2}\).

The bare setup, including the CCD camera, lens and thin Mylar mirror system, placed \({\sim }2.1\) m away from the target on the downstream side (DS) of the COBRA magnet, is shown in Fig. 18 (right). Analysed beam images, background subtracted, perspective corrected and fitted with a 2D Gaussian (see Fig. 19), show that even with a non-ideal CCD camera (no cooling), exposures of 100 s and a strong gradient magnetic field (\(\sim \) T), results comparable at the sub-millimetre level to the usual 2D APD “raster scans” performed at the centre of COBRA can be obtained in a fraction of the time. Furthermore, it was demonstrated that the beam intensity could be measured over a dynamic range of a factor of 50 and reproduce results measured independently with the “pill scintillator” scanner system, as shown in Fig. 20. The measurements were made by adjusting the opening of the FS41L/R momentum slits of the channel, so changing the intensity. Good agreement is seen.

Slit curve comparison measured with the scintillation target (triangles) and 2D pill scintillator scanner system (circles), showing an intensity variation of the muon beam of a factor of \(\sim 15\) for the scintillator scanner and \(\sim 5\) for the scintillator target

Finally, a first radiation damage study was also undertaken during the 2016 run with about \(5.5\times 10^{13}\,\upmu ^{+}\) integrated, corresponding to an integrated dose of \(\sim 30\) kGy (3 Mrad). A loss in light yield was seen, though less than expected [95], which may be explained by the way the scintillation light is collected, namely through the very thin scintillator, making it less sensitive to attenuation. A fit to the data with an exponential decay law gives a decay constant of \(D=(2.793\pm 0.041)\times 10^{14}\,\upmu ^{+}\), as shown in Fig. 21. Extrapolating this to the longest MEG beam run of 2012, at the MEG II beam intensity as measured above, would lead to a light yield of \({\sim 14}\)% at the end of a 1-year period; this would, however, still yield measurable profiles and intensities, as demonstrated above. Normalising UV-LED measurements would nevertheless be required for a corrected intensity measurement. Furthermore, this would necessitate a new target for each year. Further radiation tests are envisaged to study the effect on mechanical properties such as planarity, before a final decision on the target material is taken. A new CCD camera system for imaging the beam on target has now been procured, including cooling and a mechanical shutter, which should significantly improve the image quality and the analysis procedure.
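The extrapolation follows directly from the fitted exponential: the relative light yield after \(N\) integrated muons is \(\exp(-N/D)\). A small numerical sketch (the one-year exposure below is an assumed round figure chosen for illustration, not the measured integral of the 2012-length run):

```python
import math

D = 2.793e14          # fitted decay constant [mu+] from the exponential fit
N_2016 = 5.5e13       # integrated muons in the 2016 radiation-damage study
N_YEAR = 5.5e14       # assumed one-year integrated exposure [mu+] (illustrative)

def relative_yield(n_mu, d=D):
    """Relative scintillation light yield after n_mu integrated muons."""
    return math.exp(-n_mu / d)

print(f"after 2016 run : {relative_yield(N_2016):.1%}")
print(f"after one year : {relative_yield(N_YEAR):.1%}")
```

With this assumed exposure the yield drops to roughly the \(\sim 14\)% level quoted in the text, while the 2016 study itself corresponds to only a \(\sim 20\)% loss.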

Light yield curve for PVT exposed to \(\sim \) 30 kGy (3 Mrad) of integrated dose from the muon beam. An exponential fit to the data is shown with the resulting decay constant D

3.1.1 Target alignment

An important consideration for the target implementation is the accurate knowledge of the target position, in particular the knowledge of the target planarity and its perpendicular distance from its nominal position. Errors in this coordinate introduce a systematic error in the positron direction at the target due to the error in the path length of the curved positron trajectory projected on to the target plane. An offset of 1 mm in the target plane introduces a systematic error in the positron \(\phi \)-angle of 7–12 mrad, comparable to the \(\phi \) angular resolution achieved by MEG [2]. In MEG, this position was monitored by imaging small holes in the target foil. This monitoring was statistics limited in its ability to detect deformation of the target foil during the run; the lack of precise target position and shape information introduced a significant contribution to the systematic uncertainties in the positron angle measurement. With the anticipated improved angular resolution in MEG II, improved monitoring of the target position and shape is required, with the goal of monitoring the target planarity and transverse position to a precision \({<50}\,{\upmu \hbox {m}}\) and the axial position to a precision \({< 100}\,{\upmu \hbox {m}}\).
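The quoted 7–12 mrad per millimetre of offset can be checked with a back-of-envelope estimate: the angular error is of order the vertex mis-projection divided by the positron bending radius. A sketch of this estimate (the 1.3 T field value at the target is an assumption here; the exact effective field and track geometry determine where in the 7–12 mrad range a given track falls):

```python
# Back-of-envelope estimate of the positron phi-angle systematic caused by a
# 1 mm target offset. The field value is an assumed number for illustration.
P_GEV = 0.0528        # signal positron momentum [GeV/c]
B_T = 1.3             # assumed magnetic field at the target [T]
OFFSET_M = 1.0e-3     # target offset normal to its nominal plane [m]

radius_m = P_GEV / (0.3 * B_T)            # bending radius R = p/(0.3 B)
dphi_mrad = OFFSET_M / radius_m * 1e3     # delta(phi) ~ offset / R

print(f"bending radius = {radius_m * 100:.1f} cm")
print(f"phi systematic ~ {dphi_mrad:.1f} mrad per mm of offset")
```

The result, about 7 mrad per millimetre, sits at the lower end of the quoted range, consistent with the angle-dependent factors omitted in this order-of-magnitude estimate.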

It is envisaged, as in MEG, to implement both an optical survey for the determination of the target position, orientation, and shape and the software alignment method introduced above. The perpendicular distance of the target plane from the origin is determined by imaging the y-positions of a number of holes; there is a deficit of trajectories originating from the position of the holes. Any error in the perpendicular distance of the target from its nominal position results in the hole images varying in a systematic way depending on the value of \(\phi _{\mathrm {e}^{+}}\) (see [1] for a full description of this technique). An example of a reconstructed vertex plot of the target is shown in Fig. 22 corresponding to the 2011 run data. As in MEG, this technique will be statistics limited and not allow continuous monitoring of the target position and planarity.

The optical markings on the scintillator target used to test the photogrammetric monitoring principle

A number of further improvements to the target and its optical imagery are planned and under study:

a distortion-free/distortion minimising target suspension system allowing minimal impact of the target frame on the target foil;

further investigations to understand the origin of the previous MEG target distortion (e.g. radiation damage, brittleness due to dry He-environment);

measurement of the target planarity both before and after exposure using a coordinate measuring machine with a precision better than 50 \({\upmu \hbox {m}}\);

determination of the target frame position in the experiment to a precision of \({{\sim }15}\,{\upmu \hbox {m}}\) using a laser survey technique with low-mass corner-cube reflectors mounted on the target frame;

photogrammetric monitoring of target position, orientation and shape. A series of printed patterns (dots) are optically monitored by CCD cameras viewing the target close to axially. Preliminary studies show a precision of \({{\sim }10}\,{\upmu \hbox {m}}\) in the transverse coordinate (x–y) and \({{\sim }100}\,{\upmu \hbox {m}}\) in the axial coordinate can be achieved. The current scintillator target with its printed pattern is shown in Fig. 23.

4 Cylindrical drift chamber

4.1 Cylindrical drift chamber overview

The MEG II Cylindrical Drift Chamber (CDCH) is a single-volume detector, whose design was optimised to satisfy the fundamental requirements of high transparency and a low multiple Coulomb scattering contribution for 50 MeV positrons, sustainable occupancy (at \({\sim } 7\times 10^{7}\,\upmu ^{+}\)/s stopped on target) and fast electronics for cluster timing capabilities [96]. Although in MEG II the acceptance of the apparatus is dictated by the C-shaped LXe photon detector (see Sect. 6), CDCH has full coverage (\(2\pi \) in \(\phi \)) to avoid non-homogeneous and asymmetric electric fields.

The double readout of the wires, using the charge division and signal propagation time difference techniques, together with the ability to implement the cluster counting/timing technique [96], will further improve the longitudinal coordinate measurement.
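Both longitudinal-coordinate techniques reduce to simple estimators: charge division compares the charge collected at the two wire ends (the resistive wire acts as a divider), while the second method uses the difference in signal arrival times at the two ends. A sketch with invented numbers (wire length, propagation speed, sign conventions and readout values are all illustrative, with z = 0 at the wire centre):

```python
def z_charge_division(q_us, q_ds, wire_length_m):
    """z from the charge asymmetry between the two ends of a resistive wire."""
    return 0.5 * wire_length_m * (q_us - q_ds) / (q_us + q_ds)

def z_time_difference(t_us_ns, t_ds_ns, v_prop_m_per_ns):
    """z from the arrival-time difference of the two wire-end signals."""
    return 0.5 * v_prop_m_per_ns * (t_ds_ns - t_us_ns)

# Illustrative hit on a 2 m wire, with signal propagation ~0.25 m/ns.
z1 = z_charge_division(q_us=1.2, q_ds=0.8, wire_length_m=2.0)
z2 = z_time_difference(t_us_ns=3.0, t_ds_ns=4.6, v_prop_m_per_ns=0.25)
print(f"charge division: z = {z1:.3f} m, time difference: z = {z2:.3f} m")
```

In practice the two estimates are combined, and the cluster-timing information further constrains the hit position.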

The stereo configuration of the wires gives a hyperbolic profile to the active volume along the z-axis. The single drift cell (see Fig. 25) is approximately square, 6.6 mm (in the innermost layer) to 9.0 mm (in the outermost one) wide, with a \(20\,\upmu \text {m}\) diameter gold-plated W sense wire surrounded by \(40\,\upmu \text {m}\) diameter silver-plated Al field wires in a ratio of 5:1. To equalise the gains of the innermost and outermost layers, two guard wire layers (\(50\,\upmu \text {m}\) silver-plated Al) have been added at proper radii and at appropriate voltages. The total number of wires amounts to 13 056, for an equivalent radiation length per track turn of about \(1.58\times 10^{-3}\) X\(_{0}\) when the chamber is filled with an ultra-low-mass gas mixture of helium and isobutane (C\(_4\)H\(_{10}\)) in the ratio 90:10 (compared with \(2.0\times 10^{-3}\) X\(_{0}\) in the MEG DCH [1]). The drift chamber is built by overlapping along the radius, alternately, printed circuit boards (PCBs), to which the ends of the wires are soldered, and PEEK® spacers, which set the proper cell width, in each of the twelve sectors between the spokes of the helm-shaped end-plate (see Fig. 26). A carbon fibre support structure guarantees the proper wire tension and encloses the gas volume. At the innermost radius, an aluminised Mylar foil separates the drift chamber gas volume from the helium-filled target region.

Prototypes have been built [97] to demonstrate that the design single-hit resolution of the chamber (\(\sigma _r\simeq 110\,\upmu \text {m}\)) can be reached and that the detector can be operated in the high particle flux environment of MEG II without significant ageing, as detailed in Sect. 4.7.

4.2 The choice of the filling gas

CDCH uses a helium based gas mixture. The choice of helium is very advantageous, because of its large radiation length (\(\hbox {X}_0 {\sim } 5300\,\hbox {m}\) at STP), which ensures a small contribution in terms of multiple Coulomb scattering, a very important feature in low momentum measurements.

A small amount (\(10\%\)) of isobutane is required as a quencher to avoid self-sustained discharge. Such a percentage is sufficient, as it raises the number of primary ionisation pairs to \({\sim }\) 13 \(\hbox {cm}^{-1}\) [98], though it lowers the mixture radiation length to \(\hbox {X}_0 {\sim } 1300\,\hbox {m}\). Unfortunately, the use of an organic quencher also results in additional problems after exposure to high radiation fluxes. The recombination of dissociated organic molecules results in the formation of solid or liquid polymers which accumulate on the anodes and cathodes, contributing to the ageing of the chamber.

The fairly constant drift velocity in helium based gas mixtures assures a linear time–distance relation, down to very small distances from the sense wire. On the other hand, the high helium ionisation potential of 24.6 eV is such that a crossing particle produces only a small number of primary electron–ion pairs in helium based gas mixtures. In combination with the small size of the drift cells, this enhances the contribution to the spatial resolution coming from the statistical fluctuation of the primary ionisation along the track, if only the first arriving electrons are timed. An improvement can be obtained using the cluster timing technique, i.e. by timing all arriving ionisation clusters and so reconstructing their distribution along the ionisation track [96].
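The fluctuation from timing only the first arriving electrons can be illustrated with a toy Monte Carlo: with only ~13 primary clusters per cm, the cluster closest to the wire sits a sizeable, fluctuating distance beyond the track's true point of closest approach. The numbers below (impact parameter, cell size) are illustrative, and the simulation ignores drift and diffusion; it only shows the primary-ionisation statistics that cluster timing mitigates by using all clusters:

```python
import numpy as np

rng = np.random.default_rng(3)
LAMBDA = 1.3      # primary ionisation clusters per mm (~13 per cm)
B_TRUE = 2.0      # true track impact parameter [mm] (illustrative)
HALF_CELL = 4.0   # half-length of the track segment inside the cell [mm]

def first_cluster_distance(n_tracks):
    """Distance from the wire to the closest ionisation cluster, per track."""
    out = np.empty(n_tracks)
    for i in range(n_tracks):
        # Poisson-distributed cluster positions along the track segment.
        n = rng.poisson(LAMBDA * 2 * HALF_CELL)
        s = rng.uniform(-HALF_CELL, HALF_CELL, max(n, 1))
        out[i] = np.min(np.hypot(B_TRUE, s))
    return out

d = first_cluster_distance(20_000)
bias_um = (d.mean() - B_TRUE) * 1e3
rms_um = d.std() * 1e3
print(f"first-cluster estimator: bias = +{bias_um:.0f} um, rms = {rms_um:.0f} um")
```

The resulting bias and spread, each of order 100 \(\upmu\)m, are comparable to the design single-hit resolution, which is why reconstructing the full cluster distribution along the track is worthwhile.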

4.3 Electronics

In order to permit the detection of single ionisation clusters, the electronic read-out interface has to process high speed signals. For this purpose, a specific high performance 8-channel front-end electronics (FE) board has been designed with commercial devices such as fast operational amplifiers. The FE was designed for a gain producing a read-out signal suitable for further processing, low power consumption, a bandwidth adequate for the expected signal spectral density and a fast pulse rise-time response, to exploit the cluster timing technique [99, 100].

The FE single-channel schematic is shown in Fig. 27. The input network provides decoupling and protection, while signal amplification is realised with a double gain stage made from the ADA4927 and THS4509. Analog Devices' ADA4927 op-amp [101] works as the first gain stage: it is a low noise, ultra-low distortion, high speed, current feedback differential amplifier. The current feedback architecture provides a loop gain that is nearly independent of the closed-loop gain, achieving wide bandwidth, low distortion, low noise (input voltage noise of only 1.3 nV/\(\sqrt{\hbox {Hz}}\) at high gains) and lower power consumption than comparable voltage feedback amplifiers. The THS4509 [102] by Texas Instruments is used as the second gain stage and output driver. It is a wide-band, fully differential operational amplifier with very low noise (1.9 nV/\(\sqrt{\hbox {Hz}}\)) and extremely low harmonic distortion of \(-75\) dBc \(\mathrm {HD}_2\) and \(-80\) dBc \(\mathrm {HD}_3\) at 100 MHz. The slew rate is 6600 V \(\upmu \hbox {s}^{-1}\), with a settling time of 2 ns to 1% for a 2 V step; it is ideal for pulsed applications. The output of the FE is differential, in order to improve the noise immunity, and is connected to the WaveDREAM board [103] through a custom 5 m long cable designed to have a stable, flat frequency response (Amphenol Spectra-Strip SkewClear [104]). This cable is made from individually shielded parallel pairs; an overall ground jacket is also present, giving a maximum attenuation of 0.75 dB \(\hbox {m}^{-1}\) at 625 MHz.

In order to balance the attenuation of the output cable, pre-emphasis has been implemented on both gain stages. The pre-emphasis introduces a high-frequency peaking that compensates the output cable losses, resulting in a total bandwidth of nearly 1 GHz.

The FE electronics boards are placed in each sector of CDCH; Fig. 28 shows the end-plate mechanical scheme into which the boards are inserted. Given the area of the FE output connector socket and the available space between the layers, three different board versions have been designed: one with the output connector on the right, one in the centre and one on the left.

The pre-amplified differential signals are subsequently digitised by the WaveDREAM board at a (programmable) speed of 2 GSPS (giga-samples per second) with an analogue bandwidth of 1 GHz [103].

The current consumption of each channel is 60 mA at a supply voltage of \(\pm \,2.5\) V; this corresponds to a total power dissipation per end-plate of about 300 W, therefore an appropriate cooling system, relying both on the recirculation of a coolant fluid and on forced air, is foreseen.

4.4 The wiring procedure

A wiring robot [105] has been designed and assembled in the clean room (see Fig. 29). It allows the wires to be automatically stretched on PCB frames while keeping the wire tension and pitch under control; moreover, the system fixes the wires to the PCBs by contact-less soldering. Since CDCH has a high wire density (12 wires/\(\hbox {cm}^{2}\)), the classical feed-through technique as a wire anchoring system is hard to implement; therefore, the development of a new wiring strategy was required.

The wiring robot has been designed with the following goals:

managing a very large number of densely spaced wires,

applying the wire mechanical tension and maintaining it constant and uniform throughout all the winding process,

monitoring the wire positions and their alignments to within a few tens of \(\upmu \hbox {m}\),

fixing the wires on the PCB with a contact-less soldering system and

monitoring the solder quality of the wires to the supporting PCBs.

These requirements are satisfied by the following three systems:

1.

A wiring system that uses a semi-automatic machine to simultaneously stretch the multi-wire layer with a high degree of control of the wire mechanical tension (better than 0.2 g) and of the wire position (of the order of \(20\,\upmu \text {m}\)).

2.

A soldering system composed of an infrared (IR) laser soldering system and tin-feeder.

3.

An automatic handling system which extracts the multi-wire layers from the wiring system and places them in a storage/transport frame.

A dedicated LabView® software [105], based on a CompactRIO platform [106], controls the three systems simultaneously, sequencing and synchronising all the different operations.

4.4.1 Wiring system

The purpose of the wiring system is the winding of a multi-wire layer consisting of 32 parallel wires at any stereo angle. In order to produce a multi-wire layer (see Fig. 30), two PCBs, aligned and oriented at the proper stereo angle, are placed back-to-back on the winding cylinder. The multi-wire layer is obtained in a single operation by winding the same wire 32 times around the cylinder along a helical path, with a pitch corresponding to the wire spacing on the PCBs. The correct pitch is achieved by a system of synchronised stepping motors, driven through the CompactRIO system and controlled by a digital camera with a position accuracy of the order of \(20\,\upmu \text {m}\). The wire mechanical tension is monitored by a high precision strain gauge and corrected with a real-time feedback system acting on the electromagnetic brake of the wire spool.

Top: the distribution of the wire tension during the winding. Bottom: average wire tension for each turn of the winding cylinder

Without the feedback system, the wire tension variations are of the order of \(8\%\) because of the mechanical tolerances. The feedback system reduces these variations to about \(1\%\), reaching the values listed in Table 2 (see Fig. 31).
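As an illustration of the closed-loop principle only (not the actual MEG II controller, whose details are not given here), a toy model shows how repeatedly feeding the measured tension error back into a brake correction suppresses a static offset of the kind quoted above:

```python
def simulate_tension_loop(setpoint, disturbance, gain, n_steps=100):
    """Toy closed-loop model: at each step the measured tension error is
    accumulated into a brake correction (integral-like action). This is
    purely illustrative; the real controller and its gains are unknown."""
    correction = 0.0
    for _ in range(n_steps):
        tension = setpoint * (1.0 + disturbance) + correction
        error = tension - setpoint
        correction -= gain * error  # adjust the spool brake
    return setpoint * (1.0 + disturbance) + correction

# Illustrative numbers (assumed): 25 g nominal tension, 8% static offset
open_loop = 25.0 * 1.08                              # no feedback: 8% off
closed_loop = simulate_tension_loop(25.0, 0.08, gain=0.5)
```

With any stable gain the static error decays geometrically, so the residual variation is set by the loop bandwidth rather than by the mechanical tolerances.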

4.4.2 Soldering system

The soldering phase is accomplished by an IR laser soldering system (LASCON Hybrid with a solder wire feeder [107]). Each wire is fixed at both ends while still constrained around the winding cylinder under its own tension. The laser system is controlled by the CompactRIO and synchronised with the positioning system by using pattern-matching software to localise the soldering pad. All the soldering parameters (temperature, soldering time, solder wire length and feeding speed) are defined through a dedicated script.

4.4.3 Automatic handling system

The wound layer of soldered wires around the cylinder is unrolled and detensioned for storage and transport. This is accomplished with an automatic device. The first wire PCB is lifted off from the cylinder surface with a linear actuator connected to a set of vacuum-operated suction cups and placed on the storage and transport frame. The unrolling is accomplished by synchronising the cylinder rotation with the linear displacement of the frame. Once the layer of soldered wires is completely unrolled, the second wire PCB is lifted off from the cylinder, like the first one, and placed on the frame. The frame hosts two supports made of polycarbonate, dedicated to holding the wire PCBs at the correct position by means of nylon screws. One of the two supports can slide along the frame by means of a longitudinal threaded rod, allowing the wire length to be adjusted. The wiring information for each frame is stored in a database. The wires on the frame are then examined, stored and prepared for transportation to the CDCH assembly station.

4.5 The assembling procedure

The assembly of the drift chamber is as critical as the wiring phase and has to be performed under very carefully controlled conditions [108]. In fact, to reach the required accuracy on the drift chamber geometry and to avoid over-tensioning of wires, it is necessary to measure the position of the end-plates to better than \(100\,\upmu \text {m}\). For example, an error of 1\(^{\circ }\) on the twist angle can correspond to an extra elongation of the wire of about 1 mm. It is therefore very important to have accurate position measurements over the chamber length of \({\sim }\) 2 m. For this reason, the assembly is performed by using a coordinate measuring machine; the machine, a DEA Ghibli [109], has a maximum machine travel distance of 2500 mm \( \times \) 1500 mm \( \times \) 1000 mm and a nominal accuracy of \(5\,\upmu \text {m}\) with a contact measuring tool. The measurements of the positions of the PCBs are performed using an optical tool for the identification of the cross marks placed on the PCBs. The accuracy of the optical measurement is \({\sim } 20\,\upmu \text {m}\) in the horizontal plane and (making use of the focal distance of the optics) \({\sim } 40\,\upmu \text {m}\) on the vertical axis.

The first test on the wire trays is a quick measurement of the elongation-tension curve in the proximity of the working point. In this test the wire elongation is measured with the optical tool of the measuring machine and the wire tension is measured by both acoustic and electrical methods. In the acoustic method a periodic signal at a frequency close to the wire resonance is measured in the readout circuit by applying a HV difference between two adjacent wires and by using an acoustic source to excite the wire oscillation. This system can measure up to 16 wires simultaneously. In the electrical method the wire oscillation is forced by applying a HV signal at a known frequency. The mutual capacitance variation between two adjacent wires is then measured during a HV frequency scan with an external self-oscillating circuit connected to the wires.
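Both methods ultimately rely on the fundamental string relation \(f_1 = (1/2L)\sqrt{T/\mu }\) between resonance frequency, tension and linear mass density. A minimal sketch with illustrative wire parameters (the actual CDCH wire values are not quoted in this section):

```python
import math

def tension_from_frequency(f1, length, mu):
    """Tension (N) from the fundamental resonance f1 = (1/2L)*sqrt(T/mu)."""
    return mu * (2.0 * length * f1) ** 2

def frequency_from_tension(T, length, mu):
    """Inverse relation: fundamental frequency (Hz) for a given tension."""
    return math.sqrt(T / mu) / (2.0 * length)

# Illustrative values (assumed, not from the paper): a 1.91 m wire with a
# linear mass density typical of a thin tungsten sense wire.
L_wire = 1.91     # m
mu = 6.1e-6       # kg/m (assumed)
T = 0.24          # N, i.e. ~24.5 g-weight (assumed)

f1 = frequency_from_tension(T, L_wire, mu)       # a few tens of Hz
T_back = tension_from_frequency(f1, L_wire, mu)  # round-trip check
```

Measuring \(f_1\) acoustically or electrically thus gives the tension directly once \(L\) and \(\mu \) are known.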

The drift chamber assembly is performed in safe conditions with unstretched wires: the distance between the end-plates is fixed at 1906 mm, 6 mm less than the nominal length (1912 mm) and 2 mm less than the untensioned wire length. The positioning of the wire trays on the drift chamber is done in a well-constrained way using a rocker arm, shown in Fig. 26.

The wire tray is first engaged to the rocker arm by means of two precision pins fitting two PCB holes and a clip. The rocker arm is then engaged to a support that leaves it free to rotate and transfers the wire tray on the end-plates between two spokes. The final positioning is driven by hand through dedicated nippers. The wire PCBs are glued on the PEEK spacers with double sided tape previously applied on the inner layer. The PEEK spacers are needed to separate the layers at the right distance. Two pressing arches are used for ensuring a good adhesion of the tape.

The entire drift chamber with all layers mounted. The hyperbolic profile of the chamber is visible

Figure 32 shows a picture of the drift chamber after \(80\%\) of the layers had been assembled; the crossing of the layers in the two stereo views is shown in the box, while Fig. 33 shows the hyperbolic profile of the drift chamber with all layers mounted.

4.6 Calibration and monitoring

Michel events represent the natural way to continuously and fully characterise the spectrometer with dedicated pre-scaled triggers. The Michel positrons at the edge of the continuous energy spectrum are actually used to perform the alignment of the spectrometer, to define the energy scale of the detector and to extract all the positron kinematic variable resolutions (energy, time and angular variable resolutions).

4.6.1 The Mott monochromatic positron beam

The continuous Michel positron spectrum makes the calibration difficult and subject to significant systematic errors, while delivering mono-energetic positrons would bring important advantages.

Positrons are an abundant component of the MEG/MEG II beam (eight times more intense than the \(\mathrm {\mu ^+}\)-surface component, but they are normally separated and rejected). Turning the muon beam line into a positron beam line and tuning the positron momentum very close to the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) signal energy (\(p_{\mathrm {e}^{+}}{\sim } 53\,\text {MeV}/c\)), a quasi-monochromatic intense beam (\(\sigma ^\mathrm {beam}_{p_\mathrm {e}^{+}} {\sim } 250\,\text {keV}/c\), \(I_{\mathrm {e}^{+}}{\sim } 10^{7}\,\hbox {e}^{+}/\hbox {s}\)) can be Mott scattered on the light nuclei present in the muon stopping target, providing a very useful \({\mathrm {e}^{+}}\)-line for a full understanding of the spectrometer, from alignment to the resolutions of the positron kinematic variables.

The merits of the method, some of them unique, can be listed as

Spectrometer absolute energy scale determination.

Spectrometer alignment: the alignment is performed as an iterative procedure on the residuals between the expected and measured hits of the tracks. The alignment is executed with the detector under normal running conditions (i.e. with the magnetic field on) using curved tracks of monochromatic energy, which simplifies the procedure.

Spectrometer checks: the well known relative dependence of the Mott-scattered positron momentum on the angular variables \(\phi _{\mathrm {e}^{+}}\) and \(\theta _\mathrm {e}^{+}\) makes a detailed investigation of the spectrometer possible; any distortion would signal a deviation from the expected detector behaviour.

Spectrometer acceptance: the well known Mott cross section permits the direct measurement of the spectrometer acceptance.

Independent check of the muon polarisation: the comparison of the Michel versus Mott \(\theta _\mathrm {e}^{+}\)-distribution, after taking into account the \(\theta \) cross-section dependence of the Mott events, allows a cross-check of the muon polarisation at the Mott positron energy.

Positron momentum and angular resolutions: positron momentum and angular resolutions are extracted using double-turn track events. The double-turn track is divided into two independent tracks; the two tracks are propagated towards the target and the difference between the relevant observables (i.e. the \(p_{\mathrm {e}^{+}}\), \(\phi _{\mathrm {e}^{+}}\) or \(\theta _\mathrm {e}^{+}\) variables) is computed.
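The statistics behind the double-turn method can be sketched as follows: since the two track segments are independent measurements of the same quantity, the width of their difference overestimates the single-segment resolution by a factor \(\sqrt{2}\). A toy Monte Carlo with an assumed (illustrative) Gaussian resolution:

```python
import math
import random
import statistics

random.seed(42)

# Each double-turn track yields two independent measurements of the same
# angle; sigma_single is an assumed illustrative value, not a MEG II number.
sigma_single = 7.0e-3   # rad (assumed)
n_tracks = 100_000
deltas = [random.gauss(0.0, sigma_single) - random.gauss(0.0, sigma_single)
          for _ in range(n_tracks)]

sigma_delta = statistics.pstdev(deltas)          # ~ sqrt(2) * sigma_single
sigma_recovered = sigma_delta / math.sqrt(2.0)   # recovered single resolution
```

Fitting the measured \(\mathrm {\varDelta }\theta \) distribution and dividing its width by \(\sqrt{2}\) therefore gives the per-track resolution directly from data.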

As final remarks, it should be noted that the high Mott positron rate enables a fast calibration, and that the method requires neither a dedicated target (i.e. the Mott target is the MEG II muon stopping target) nor additional beam infrastructure.

The potential of this method has been proven using dedicated beam tests performed at the \(\pi \)E5 beam line (i.e. the MEG II beam line) with the MEG spectrometer in 2012. Figure 34 shows the good agreement between the Mott \({\mathrm {e}^{+}}\)-line (black points) and the Monte Carlo (MC) simulation prediction (red dashed area). The data are fitted with a double Gaussian function: one Gaussian accounting for the core of the distribution and one for the low-energy tail. With the beam momentum slits virtually “fully closed” we obtain a line centred at \(\hat{E}_{\mathrm {e}^{+}}= {(51.840\pm 0.003)}\,{\hbox {MeV}}\) with a width \(\sigma _{E_{\mathrm {e}^{+}}}^{\mathrm {core}} = (412\pm 10)\,\hbox {keV}\).

The ability to perform the spectrometer alignment and obtain consistent results can be seen in Table 3, which shows a set of Mott data taken in 2013 reconstructed with the Michel-based alignment versus the Mott-based alignment: both the mean energy and the width are compared, and the two data sets are in good agreement. The two different methods allow different systematic errors to be identified.

Similarly a comparison between the \(p_{\mathrm {e}^{+}}\) and angular variable resolutions extracted using the double-turn track method applied to the Mott sample and the Michel sample has also been performed. An example of the \(\theta _\mathrm {e}^{+}\)-angular distribution obtained using the Mott sample and applying the double-turn method is shown in Fig. 35. Actually, the double-turn resolutions on all positron variables measured with the Mott sample were found to be similar to or even better (by up to \(20\%\)) than those measured with the Michel data. The difference has been understood in terms of the different pile-up conditions under which the spectrometer works in the two cases. This is another example in which independent methods complement each other for a better understanding of the detector. Figure 36 shows that the method is very sensitive to misalignment. The red points show the expected dependence of the reconstructed \(E_{\mathrm {e}^{+}}\) versus the reconstructed \(\phi _{\mathrm {e}^{+}}\); the green points show the same measurement in the presence of an erroneous set of survey data used as input to the alignment procedure; the plot highlights the problem unambiguously. It is also possible to reproduce the plot in the simulation when using inconsistent alignment data (see the yellow points). These results validate the method as a standard calibration tool for MEG II.

The distribution of \(\mathrm {\varDelta } \theta = \theta _1 - \theta _2\) as obtained using the Mott data sample and the double-turn method, where \(\theta _1\) and \(\theta _2\) are the reconstructed \(\theta \)-angles associated with the first and second part of a double-turn track, respectively. The distribution is fitted with a double Gaussian function

Reconstructed Mott positron energy versus reconstructed \(\phi \)-angle. Under normal functioning conditions the trend of energy versus the \(\phi \)-angle is flat (red points). If some distortions are present, deviations are observed, as shown in the case of green and yellow points. See the text for more details

4.7 Expected performances

As preliminary tests, the spatial resolution and the ageing properties of the chamber have been measured on prototypes. For a precise measurement of the single-hit resolution, several drift chamber prototypes were tested in a cosmic ray facility set-up [97, 110], and an example result is shown in Fig. 37. In these tests, the total bandwidth was limited to 700 MHz by the waveform digitiser. Expected biases and resolution tails are observed, due to the poor ionisation statistics in the very light helium-based gas mixture. Despite the presence of these tails, the bulk of the resolution function has a Gaussian shape, with a width of \(\sigma _r\simeq 110\,\upmu \text {m}\), averaged over a large range of angles and impact parameters. Since the longitudinal coordinate of hits is determined by exploiting the stereo angle, the corresponding resolution is expected to be \(\sigma _z=\sigma _r/\sin \theta _\mathrm {s}\simeq 1\,\mathrm{mm}\). However, in the final chamber further improvements are expected thanks to the new front-end electronics with a 1 GHz bandwidth, allowing for the exploitation of the cluster timing technique.
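The longitudinal resolution quoted above follows directly from \(\sigma _z=\sigma _r/\sin \theta _\mathrm {s}\). A one-line numerical check, assuming an illustrative stereo angle of \(6^{\circ }\) (the actual CDCH stereo angles are not quoted in this section):

```python
import math

sigma_r = 110e-6                   # m, measured single-hit resolution
theta_stereo = math.radians(6.0)   # assumed stereo angle (illustrative)

# Longitudinal resolution from the stereo projection, ~1 mm
sigma_z = sigma_r / math.sin(theta_stereo)
```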

CDCH single hit resolution function, measured on a prototype in a cosmic ray facility, as the difference between the measured drift distance x and the particle’s impact parameter b. A fit is performed with a Gaussian core function of mean \(\mu \) and width \(\sigma \), analytically matched with an exponential tail starting at \(\mu + \delta \) (see [97] for more details)

The operation and performance of the chamber will also be affected by the extremely high positron rate in CDCH (up to \({\sim }\) 30 kHz \(\hbox {cm}^{-2}\)), which will induce a huge amount of collected charge in the hottest portion of the innermost wire (\({\sim }\) 0.5 C \(\hbox {cm}^{-1}\)). Since at such values of collected charge wire chambers can present inefficiencies and gain degradation, laboratory tests on prototypes in a dedicated irradiation facility were performed [97]. Those tests showed a sustainable gain degradation of less than 20% per DAQ year (see Fig. 38) in the hottest few centimetres of the innermost wires. This local gain degradation could eventually be recovered by increasing the voltage of the affected wires, at the cost of a gain increase at the wire ends, where the track occupancy is lower.

The expected CDCH performance compared to the MEG DCH system is summarised in Table 4. The resolutions are obtained using the results of tests with prototypes as input for the simulation of the detector (under the assumption of Gaussian single hit resolutions), and cross-checked with a full simulation of the detector response (which also accounts for non-Gaussian tails). Non-Gaussian tails are observed in the resolution functions, as expected from Coulomb scattering at large angles and from energy-loss fluctuations. Core resolutions are shown in Table 4, but the full resolution functions have been used for the estimate of the MEG II sensitivity.

In the table we quote separately the efficiency for tracking a signal positron and the probability that such a positron reaches the pTC in a place that can be geometrically matched to the reconstructed track (matching efficiency). In MEG, the matching efficiency was limited by the positron scattering on service materials (electronics, cables, etc.) in the volume between the drift chambers and the timing counter. The new design significantly reduces this loss of efficiency, and the estimated transparency toward the pTC is doubled. The preliminary estimate of the tracking efficiency in MEG II is expected to improve with further developments of the reconstruction algorithms.

5 Pixelated timing counter

Precise measurement of the time coincidence of \(\mathrm {e}^+\gamma \) pairs is one of the important features of the MEG II experiment in order to suppress the predominant accidental background. The positron time \(t_{\mathrm {e}^{+}}\) must be precisely measured by the pixelated timing counter (pTC), succeeding the MEG timing counter, with a resolution \(\sigma _{t_{\mathrm {e}^{+}}}{\sim }\,30 \hbox { ps}\) at a hit rate \({\sim }5\) MHz. In addition, it also generates trigger signals by providing prompt timing and direction information on the positron.

5.1 Limitations of the MEG timing counter

In the past decades, timing detectors based on scintillation counters with PMT read-out have been built and operated successfully. The best achievements with this technique gave time resolutions slightly better than 50 ps for a minimum ionising particle (e.g. [111, 112]). One of them was the MEG timing counter, consisting of 30 scintillator bars (BC-404 with 80 \(\times \) 4 \(\times \) 4 \(\hbox {cm}^{3}\) dimensions), each read out by fine-mesh PMTs at both ends [113]. It showed a good intrinsic time resolution of 40 ps in beam tests, but the operative time resolution on the experimental floor was measured to be \(\sigma _{t_{\mathrm {e}^{+}}} {\sim }70{\hbox { ps}}\). The main causes of the degradation were:

1.

a large variation of the optical photon paths originating from the large size of the scintillator (long longitudinal propagation and incident-angle dependence due to its thickness),

2.

a degradation of the PMT performance in the MEG magnetic field,

3.

the error of timing alignment among the bars (time calibration) and

4.

the electronic time jitter.

The sum of all these contributions accounted for the above mentioned operating timing resolution.

Furthermore, a positron crossing a bar sometimes impinged on the same bar again while moving along its approximately helical trajectory. Such double-hit events produced a tail component in the timing response function.

Finally, since the PMTs worked at the far edge of their performance versus single-counter event rate (1 MHz per PMT), the planned increase of the muon stopping rate would have required a segmentation increased by at least the same factor with respect to the previous configuration in order to preserve the proper PMT working point.

5.2 Upgrade concept

We plan to overcome such limitations by a detector based on a new concept: a highly segmented scintillation counter. In the new configuration, the 30 scintillator bars are replaced by 512 small scintillation tiles; we call the new detector pixelated timing counter (pTC). There are several advantages in this design over the previous one:

1.

The single counter can easily have a good time resolution due to the small dimensions.

2.

The hit rate of each counter is kept low enough that the pile-up probability, as well as the ‘double-hit’ probability, is negligibly small.

3.

Each particle’s time is measured with many counters to significantly improve the total time resolution.

4.

A flexible detector layout is possible to maximise the detection efficiency and the hit multiplicity.

The third point is of particular importance: by properly combining the times measured by \(N_\mathrm {hit}\) counters, the total time resolution is expected to improve as
$$\sigma _{t_{\mathrm {e}^{+}}} = \frac{\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {single}}{\sqrt{N_\mathrm {hit}}},$$

where \(\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {single}\) is the total time resolution of a single-counter measurement, which includes the counter intrinsic resolution \(\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {counter}\), the error in the time alignment over the counters \(\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {inter{\text {-}}counter}\) and the electronics jitter \(\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {elec}\). The contribution of multiple Coulomb scattering, which does not scale as \(1/\sqrt{N_\mathrm {hit}}\), is negligible. Therefore, the multi-hit-measurement approach overcomes most of the limitations mentioned above and is superior to pursuing the ultimate time resolution of a single device. Note that, to properly combine the hit times, the positron propagation times between the counters have to be well known; the trajectory extrapolated from the CDCH is used, refined with the reconstructed counter hit positions.
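The scaling above can be sketched numerically. The individual error terms below are assumptions for illustration only, not the official MEG II error budget:

```python
import math

def single_counter_sigma(counter, inter_counter, elec):
    # quadrature sum of the independent single-counter error terms
    return math.sqrt(counter**2 + inter_counter**2 + elec**2)

def combined_sigma(sigma_single, n_hit):
    # averaging n_hit independent counter times improves as 1/sqrt(n_hit)
    return sigma_single / math.sqrt(n_hit)

# Illustrative numbers (assumed): intrinsic, inter-counter and electronics
# contributions in seconds, and an assumed average multiplicity of 9 hits.
sigma_single = single_counter_sigma(counter=72e-12,
                                    inter_counter=20e-12,
                                    elec=30e-12)       # ~80 ps
sigma_total = combined_sigma(sigma_single, n_hit=9)    # ~27 ps
```

With these assumed inputs the multi-hit combination lands near the \({\sim }30\) ps target even though no single counter reaches it.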

This pixelated design became possible by using a new type of solid-state photo-sensor, the silicon photomultiplier (SiPM), which is a valuable replacement for the conventional PMT because of its excellent properties.

The compactness and low cost of SiPMs allow a high segmentation and, together with the high immunity to magnetic fields, enable a flexible design of the counter layout without deterioration of the performance in the COBRA field. A high time resolution of a SiPM-based scintillation counter was demonstrated in [114] prior to designing MEG II. SiPMs are thus also the best solution for the read-out of the pixel modules in this detector.

5.3 Design

The pTC consists of two semi-cylindrical super-modules like the previous ones, mirror symmetric to each other and placed upstream and downstream in the COBRA spectrometer. Figure 39 shows one of the super-modules composed of 256 counters fitted to the space between the CDCH and the COBRA magnet. The volume is separated from the CDCH, with the pTC modules placed in air.

Each counter is a small ultra-fast scintillator tile with SiPM read-out described in detail in Sect. 5.3.1. Sixteen counters align in the longitudinal (z) direction at a 5.5 cm interval, and 16 lines are cylindrically arranged at a 10.3\(^{\circ }\) interval, alternately staggered by a half counter. The counters are tilted at 45\(^{\circ }\) to be approximately perpendicular to the signal \({\mathrm {e}^{+}}\) trajectories. The total longitudinal and \(\phi \) coverages are \(23.0<|z|< 116.7 \,\hbox {cm}\) and \(-165.8^{\circ }<\phi < +5.2^{\circ }\), respectively, which fully cover the angular acceptance of the \({\mathrm {e}^{+}}\) from \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decays when the photon points to the LXe photon detector. This counter configuration was determined via a MC study to maximise the experimental sensitivity (given by the detection efficiency and the total time resolution) within the constraint of a limited number of electronics read-out channels (1024 channels in total).

5.3.1 Counter module design

The single counter is composed of a scintillator tile and multiple SiPMs. The counter dimensions are defined by the length (L), width (W), and thickness (T) of the scintillator tile and described as \(L\times W\times T\) below. Multiple SiPMs are optically coupled to each \(W\times T\) side of the scintillator. The signals from the SiPMs on each end are summed up and fed to one readout channel. The \({\mathrm {e}^{+}}\) impact time at each counter is obtained by averaging the times measured at both ends.
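The averaging of the two end times can be sketched as follows; the effective light propagation speed used here is an assumed illustrative value, not a measured MEG II parameter:

```python
def hit_time_and_position(t_left, t_right, length, v_eff=1.26e8):
    """Combine the two SiPM-end times of one counter.

    Averaging cancels the dependence on the hit position along the tile;
    the time difference gives that position. v_eff is an assumed effective
    light propagation speed in the scintillator (m/s, illustrative).
    """
    t_hit = 0.5 * (t_left + t_right) - 0.5 * length / v_eff
    x_hit = 0.5 * v_eff * (t_left - t_right)  # offset from the counter centre
    return t_hit, x_hit

# Hypothetical hit 2 cm right of the centre of a 12 cm counter at t0 = 0:
v = 1.26e8
t_left = (0.06 + 0.02) / v    # light path to the left end
t_right = (0.06 - 0.02) / v   # light path to the right end
t0, x0 = hit_time_and_position(t_left, t_right, 0.12, v_eff=v)
```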

We performed an extensive study to optimise the single counter design, starting from a comparative study of scintillator material, SiPM models, number of SiPMs per counter, and connection scheme. Then, an optimisation of the scintillator geometry was performed to find the best compromise between the total resolution, detection efficiency and required number of channels. The results are reported in detail in [115, 116, 117, 118] and summarised below.

5.3.1.1 Scintillator The choice of the scintillator material is crucial to optimise the time resolution. The candidates selected from the viewpoint of light yield, rise- and decay-times, and emission spectrum are the ultra-fast plastic scintillators from Saint-Gobain listed in Table 5. Note that the smaller counter dimensions allow the use of such very short rise time scintillators, which typically have short attenuation lengths. The time resolutions were measured for all types of scintillator and different sizes. BC-422 was found to always give the highest time resolution for each size (tested up to 120 \(\times \) 40 \(\times \) 5 mm) and therefore was chosen.

Table 5

Properties of ultra-fast plastic scintillators from Saint-Gobain. The properties of BC-404, which was used in the previous timing counter bars, are also shown for comparison

Different types of reflectors such as no reflector, Teflon® tape, aluminised Mylar® and enhanced specular reflector (ESR) from 3M were tested to improve the light collection and hence the time resolution. The best time resolution was obtained with ESR film, while a small worsening was observed with Teflon tape (diffuse reflector) compared to no reflector [115].

5.3.1.2 SiPM The photo-sensors must be sensitive to the scintillation light in the near-ultraviolet (NUV) range. Recently, several manufacturers have developed such NUV-sensitive SiPMs based on ‘p-on-n’ diode structures. Therefore, we tested a number of such NUV-sensitive SiPMs available as of 2013 from AdvanSiD (ASD), Hamamatsu Photonics, KETEK, and SensL.

Before deciding on the SiPM models, we first examined the schemes of SiPM connection. In order to compensate for the small active area of SiPMs, multiple SiPMs are connected in parallel for read-out. However, performance issues of the parallel connection are an increase in the signal rise time and width and an increase in the parallel and series noise; both originate from the larger sensor capacitance and negatively affect the time resolution. We have examined an alternative: the series connection of multiple SiPMs (\(N_\mathrm {SiPM}=3\text {--}6\)). Figure 40 shows a comparison of time resolutions between series and parallel connections. The series connection gives better time resolutions at all over-voltages. This is due to the narrower output pulse shape, a consequence of the reduced total sensor capacitance in the series circuit. Although the total charge (gain) is reduced to \(1/N_\mathrm {SiPM}\) of that of a single SiPM, the signal amplitude (pulse height) remains comparable (compensated by the \(N_\mathrm {SiPM}\) times faster decay time). Thus, we conclude that the series connection is better for the pTC application. We simply connect SiPMs in series on a custom printed circuit board (PCB), while we adopt a more complex scheme for the MPPCs used in the LXe photon detector (see Sect. 6.2.7).
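The capacitance argument can be made quantitative with a simple sketch: for \(N\) identical sensors the total capacitance is \(C/N\) in series and \(NC\) in parallel, so the RC time constant into a fixed load differs by a factor \(N^2\). The single-SiPM capacitance below is an assumed illustrative value:

```python
def effective_capacitance(c_single, n, connection):
    """Total sensor capacitance for n identical SiPMs."""
    if connection == "series":
        return c_single / n
    if connection == "parallel":
        return c_single * n
    raise ValueError(connection)

# Illustrative: 6 SiPMs of ~320 pF terminal capacitance each (assumed)
# driving a 50 ohm line; the RC time constant sets the pulse-width scale.
n, c1, r_load = 6, 320e-12, 50.0
tau_series = r_load * effective_capacitance(c1, n, "series")
tau_parallel = r_load * effective_capacitance(c1, n, "parallel")
```

The \(N^2\) ratio between the two time constants is the origin of the much narrower pulses, and hence the better timing, of the series connection.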

Comparison of time resolutions between series and parallel connections measured with 60 \(\times \) 30 \(\times \) 5 \(\hbox {mm}^{3}\) sized counter read-out with 3 SiPMs (S10362-33-050C) from Hamamatsu Photonics at each end (from [118])

For each type of SiPM, we measured the device characteristics (such as dark count rate, cross-talk probability, PDE, and temperature dependence) and the time resolution when coupled to a scintillator. The main results are shown in Fig. 41. The best time resolution is obtained with SiPMs from Hamamatsu Photonics, which have the highest PDE. This result indicates that the time resolution of our counter is predominantly limited by the photon statistics and increasing the number of detected photons is the most important and straightforward way of improving the time resolution. Using higher PDE SiPMs is one way.

Another way is increasing the sensor coverage by using more SiPMs. Figure 42 shows the time resolution measured with different numbers of SiPMs. In this study, SiPMs from ASD were used. A clear improvement with a larger number of SiPMs is observed, and a time resolution of 50 ps is achieved with 6 SiPMs at each end coupled to a 90 \(\times \) 40 \(\times \) 5 mm scintillator. This is better than that achieved with 3 SiPMs from Hamamatsu Photonics (58 ps). How many sensors can be used depends on the final detector geometry and on cost, so the SiPM model and number were decided after fixing those parameters. We finally adopted the 6-series solution using ASD SiPMs, which gives the best performance within our budget constraint.

Time resolution measured with different numbers of SiPMs. 3, 5, and 6 SiPMs (ASD-NUV3S-P-50) connected in series and coupled to each end of 90 \(\times 40 \times 5\) mm scintillator (from [118])

The model used in the pTC is the ASD-NUV3S-P-High-Gain; the specifications provided by AdvanSiD are listed in Table 6. Figure 43 shows the measured single-cell-fired signal. The SiPM’s specific long exponential tail, with a time constant of 124 ns, is due to the recharge (recovery) current determined predominantly by the quench resistance and the cell capacitance, which are measured to be \({R_q} = ({1100\pm 50})\hbox { k}\varOmega \) and \(C_D = {(100\pm 10)}{\hbox { fF}}\), respectively.
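A quick consistency check of the quoted tail: the expected recharge time constant is simply \(\tau = R_q C_D\), which comes out close to the fitted 124 ns (we assume the small remainder is due to parasitic contributions not included in this estimate):

```python
# Recharge time constant of one SiPM cell from the measured quench
# resistance and cell capacitance quoted in the text.
R_q = 1.1e6      # ohm  (1100 +/- 50 kohm)
C_D = 100e-15    # F    (100 +/- 10 fF)

tau_recharge = R_q * C_D   # ~110 ns, close to the fitted 124 ns tail
```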

Pulse shape of a single-cell-fired signal from an ASD-NUV3S-P-High-Gain (with a gain 60 amplifier). The black line shows the averaged pulse shape and the red curve is the best fit function

5.3.1.3 Geometry The single counter time resolution was measured for scintillator tiles of different sizes, and the results are shown in Fig. 44. The size dependence is understandable from the photon statistics, determined by the sensor coverage of the scintillator cross-section (dependent on W) and by the light attenuation in the scintillator (dependent on L).

Dependence of the counter time resolution on the size measured with 3 SiPMs Hamamatsu Photonics (S10362-33-050C) at each end. The superimposed curves show the dependence expected from the detected photon statistics. The shaded bands show the uncertainty. See [117] for the detailed description

The size has to be optimised by balancing the single-counter resolution (smaller is better) against the hit multiplicity and detection efficiency (larger is better). This optimisation is performed via a MC simulation study using the measured single-counter resolutions. As a result, longer counters (up to the measured maximum \(L = 120\,\mathrm{mm}\)) are found to give a better performance. Considering the hit rate and the double-hit probability, we did not test longer counters and fixed the length to be \(L=120\,\mathrm{mm}\). The optimal counter width W depends on the longitudinal position because the radial spread of the signal \({\mathrm {e}^{+}}\) trajectories depends on the longitudinal position in the pTC region. We adopt two different widths: \(W= 40\) and 50 mm. The \(W=50\,\mathrm{mm}\) counters are assigned to the middle longitudinal position (see Fig. 39), where the radial spread becomes large. We observed a moderate dependence of the resolution on the thickness T and decided for \(T=5\,\mathrm{mm}\), which is sufficiently thick to match the SiPM active area. A 5 mm thick scintillator deflects the direction of a 50 MeV positron by \(\theta ^\mathrm {RMS}_\mathrm {MS}{\sim }25\,\hbox {mrad}\), whose impact on the propagation time estimation is estimated to be \({\sim }5\) ps, negligibly small compared to the counter resolution.
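The quoted deflection is roughly reproduced by the standard Highland/PDG multiple-scattering formula, assuming a radiation length \(X_0\simeq 42.5\) cm for plastic scintillator (an assumed value, not quoted in the text):

```python
import math

def highland_theta0(p_mev, beta, x_over_x0):
    """PDG/Highland RMS multiple-scattering angle (plane-projected), rad."""
    return (13.6 / (beta * p_mev)) * math.sqrt(x_over_x0) \
        * (1.0 + 0.038 * math.log(x_over_x0))

# 50 MeV positron (beta ~ 1) crossing 5 mm of plastic scintillator;
# X0 ~ 42.5 cm is an assumed radiation length for plastic scintillator.
theta0 = highland_theta0(p_mev=50.0, beta=1.0, x_over_x0=0.5 / 42.5)
```

The result falls in the 20-30 mrad range, consistent with the \({\sim }25\) mrad quoted above.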

5.3.1.4 Final design of the counter module Figure 45 shows examples of the final counter modules. A counter consists of a tile of BC-422 with dimensions of \(L\times W\times T=120\times (40~ \mathrm {or}~50) \times 5\,\hbox {mm}^{3}\) and 12 ASD SiPMs, 6 on each \((W\times T)\)-side, directly coupled to the scintillator with optical cement (BC-600). The scintillator is wrapped in \(32\,\upmu \text {m}\) thick ESR film, and then the module is wrapped in a \(25\,\upmu \text {m}\) thick black sheet of Tedlar for light tightness.

Picture showing both types of counter modules. Left: \(W=40\,\mathrm{mm}\) counter wrapped in the reflector (example with the L-shaped PCB). Right: \(W=50\,\mathrm{mm}\) counter with optical fibre before wrapping in the reflector

Figure 46 shows the PCBs on which the SiPMs are soldered. The L-shaped PCBs are used for the counters at the inner (small |z|) location where the radial space is more restricted because of the smaller inner diameter of the magnet coils. Parts made of aluminium are attached to the PCBs and thermally coupled to one of the metal layers on the PCBs. They are used not only to mechanically fix the counters but also to thermally link the SiPMs to the main support structure whose temperature is controlled by a chiller system.

Each counter except for the counters using the L-shaped PCBs is equipped with an optical fibre for the laser calibration (described in Sect. 5.5.2).

5.3.2 Read-out chain

The basic idea of the read-out scheme is to send the raw SiPM output signals directly to the WaveDREAM read-out boards (WDBs) (see Sect. 8), on which the signals are amplified, shaped, and digitised. Hence, the SiPMs and amplifiers are separated by long cables without any pre-amplification. This approach is adopted both for simplicity and for space and power consumption reasons. The reduction of the sensor capacitance by the series connection allows 50 \(\varOmega \) transmission without significant broadening of the pulse.

The SiPM mounting PCBs. The top one is for the 50 mm counters and the others are for the 40 mm ones

The counter modules are mounted on 1 m long custom PCBs (back-planes) placed on the mechanical support structures, allowing the signals to be transmitted outside the spectrometer. The back-planes have coaxial-like signal lines with a 50 \(\varOmega \) characteristic impedance. The ground lines are independent of each other to avoid possible ground loops. The signals are then transmitted to the WDBs on 7 m long non-magnetic RG-178 type coaxial cables (Radiall C291 140 087). MCX connectors are used for all connections.

The SiPM bias voltages, typically 164 V for the six ASD SiPMs in series, are supplied from the WDBs through the signal lines, so only one cable per channel is needed.

The input signals are amplified by a factor of 100 in the analogue stage of the WDBs. Eliminating the long-time-constant component of the SiPM output pulse turns out to be very important for a precise time measurement, in order to suppress the effect of dark counts and obtain a stable baseline, especially after some radiation damage. For this reason a pole-zero cancellation circuit is incorporated on the WDB.

The amplified and shaped waveforms are digitised at a sampling frequency of 2 GSPS by the DRS4 chips on the WDBs for a detailed offline analysis of the pulses in order to compute the precise signal times.

5.3.3 Mechanical support structure

The mechanical support structures shown in Fig. 47 are made of aluminium cylinders with inner and outer radii of 380 mm and 398 mm, respectively. The back-planes fit into grooves machined on the structures. A hole is drilled below the centre of each counter to pass an optical fibre from underneath. Cooling-water pipes are laid on the outer side of the structure and connected to the chiller to keep the temperature below 30 \(^{\circ }\)C with a stability better than 1 \(^{\circ }\)C.

An example of a hit pattern from a simulated signal \({\mathrm {e}^{+}}\). The CDCH is not drawn in these figures

5.4 Hit distribution and rate

A MC simulation based on Geant4 (version 10.0) [123, 124, 125] is performed with the final detector configuration to evaluate the hit distribution and hit rates. Figure 48 shows an example of a hit pattern in the pTC from an \({\mathrm {e}^{+}}\) from a \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decay. Figure 49 shows the distribution of the number of hit counters for signal positrons generated in the angular acceptance. The mean hit multiplicity is evaluated to be \(\bar{N}_\mathrm {hit}=9.3\).

Distribution of the expected number of hit counters for signal positrons, from a MC simulation

The hit rates in the individual counters are estimated by simulating a \(\mathrm {\mu ^+}\) beam (at a rate of \(9\times 10^{7}\,\hbox {s}^{-1}\)) whose muons decay in accordance with the SM calculation. The result is shown in Fig. 50 as a function of the z-position of the counters. The rates are position dependent, with a maximum of 160 kHz. This result is confirmed by measurements in the pilot run described in Sect. 5.6.

Counter hit rate under MEG II beam conditions, as a function of counter z-position. The black squares are from the MC simulation and the red circles are from the pilot run. The points with zero hit rate are due to dead channels in the readout electronics (from [126])

5.5 Calibration methods

It is important to precisely synchronise all the counters, even though the effect of a misalignment of the individual counter times is diluted by averaging over the multiple hit counters, as seen from Eq. (7). Taking this dilution into account, a precision of \(\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {inter{\text {-}}counter}{\sim }30 \hbox { ps}\) is required for the time alignment of each counter. Two schemes, complementary to each other and with independent systematic errors, are under development for the inter-counter time alignment.

5.5.1 Track-based method

High momentum Michel positrons pass through more than one counter as would signal positrons. Multiple hits allow time-alignment between adjacent counters after correcting for the \({\mathrm {e}^{+}}\) travel time between hits. The track information analysed by the CDCH can be used for a precise correction of the travel time. Generally, the track-based method provides very precise results; \(O(\mathrm {ps})\) is achieved in a study using a MC simulation. However, this method is subject to systematic position-dependent biases caused by small systematic errors in the travel time estimation. Such biases will be detected and corrected for by the laser-based method detailed in Sect. 5.5.2. Furthermore, this method cannot be used to synchronise the two super-modules.

5.5.2 Laser-based method

The counters can also be time-aligned by distributing synchronous light pulses to all the counters through optical fibres. To this end, we have developed a laser calibration system, shown schematically in Fig. 51 (see also [127]). Ideally, laser light would be distributed to all counters; however, it turned out to be impossible to install optical fibres to the counters at the innermost location (80 counters in total) due to space limitations. For those counters, we rely on the track-based calibration detailed in Sect. 5.5.1.

The light source is a PLP10-040 [128], with an emission wavelength of 401 nm, a pulse width of 60 ps (FWHM), and a peak power of 200 mW. The fast light pulse is first split into two outputs; one is directed to a photodiode to gauge the signal amplitude, and the other serves as input to an active, remotely controlled optical multiplexer [129] with nine output channels, such that the signal is output alternately to each of them.

Each of the outputs of the multiplexer (except one used as a monitor) is then input to two cascaded stages of \({1 \times 8}\) optical splitters [130], each of which splits the input signal into eight signals of approximately equal output amplitudes. As a result, 64 channels become available in parallel with an amplitude \({{\sim } 1/64}\) of the original (actually smaller due to losses in the various stages). Finally, each output signal from the last stage of splitters is fed into a counter through an optical fibre. Figure 52 shows how the optical fibre is fixed to the scintillator: to stably fix the fibre, a small hole (2.5 mm diameter, 1 mm depth) is drilled into the bottom face of the scintillator, and the ferrule of the fibre is inserted into the hole using a polycarbonate screw and a support bar (ABS resin) across the two PCBs.

As mentioned in Sect. 5.6, we performed pilot runs using the MEG II beam. In the 2016 run, we installed the laser calibration system for 40 counters and tested it by examining the consistency with the track-based time calibration method detailed in Sect. 5.5.1. The time offset of each counter was calculated independently with both methods. Figure 53 shows the difference between the results of the two methods. The dispersion (39 ps standard deviation) includes the systematic errors of both methods; the precision of each individual method is therefore better. The time difference was stable during the three-week run at the level of \(\sigma = 6 \hbox { ps}\). By combining the two methods, it is possible to calibrate all the counter offsets to a precision better than \(\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {inter{\text {-}}counter}=30\hbox { ps}\). The average contribution of the inter-counter calibration to the timing resolution can then be evaluated as \(\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {inter{\text {-}}counter}/ \sqrt{\bar{N}_\mathrm {hit}}={10}\hbox { ps}\).
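The quoted 10 ps is direct arithmetic from the two numbers given in the text; a one-line check (plain Python, no assumptions beyond the text):

```python
import math

sigma_inter = 30.0   # ps, required inter-counter alignment precision
mean_nhit = 9.3      # mean number of hit counters (from the MC study)

# the alignment error averages down over the hit counters
contribution = sigma_inter / math.sqrt(mean_nhit)
print(f"averaged contribution: {contribution:.0f} ps")  # ~10 ps
```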

Difference of time offsets between the laser-based method and the track-based method at the beginning of the pilot run in 2016. The difference was calculated only for the laser-installed-counters. The standard deviation is 39 ps. Each error bar includes systematic uncertainties of the two methods

5.6 Expected performance

The single-counter performance is evaluated using electrons from a \(^{90}\mathrm {Sr}\) source. All the assembled counters were irradiated at three positions (\(-45\), 0, and \(+45\) mm along L) to measure the time resolution, position resolution, light yield, and effective light speed \(v_\mathrm {eff}\). The pulse time on each read-out side (\(t_1\) and \(t_2\)) is picked off by the digital-constant-fraction method. The hit time is then reconstructed as the average of the two times, \((t_1+t_2)/2\), while the hit position along L is reconstructed from the time difference, \((t_1-t_2)\times v_\mathrm {eff}/2\). The mean time resolutions over all assembled counters are 72 ps and 81 ps for the \(W=40\) mm and 50 mm counters, respectively. These are about 15% worse than those obtained with the prototype counters in the R&D phase, because of quality variations of the SiPMs and scintillators in the mass production phase. The hit position resolution is \(\sigma _{L}\sim 10\,\mathrm{mm}\).
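The two-sided reconstruction just described can be sketched in a few lines. This is an illustrative sketch, not the analysis code; the numerical value of \(v_\mathrm {eff}\) below is an assumed placeholder, not the measured one.

```python
def reconstruct_hit(t1, t2, v_eff):
    """Reconstruct hit time and longitudinal position from the times
    picked off at the two counter ends (as in the text)."""
    t_hit = 0.5 * (t1 + t2)        # average: independent of hit position
    pos = 0.5 * (t1 - t2) * v_eff  # time difference maps to position along L
    return t_hit, pos

# toy example, with an assumed effective light speed of 120 mm/ns
t, x = reconstruct_hit(t1=1.25, t2=1.05, v_eff=120.0)  # times in ns
print(t, x)  # hit at ~1.15 ns, ~12 mm from the counter centre
```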

The performance with multiple counters was studied in a series of beam tests carried out at the Beam Test Facility (BTF) at LNF and the \(\mathrm {\pi E5}\) beam channel at PSI. Six to ten prototype counters aligned as a telescope were irradiated by 50 MeV monochromatic positrons at the BTF or by Michel positrons at PSI. The effects of multiple Coulomb scattering and secondary particles, such as \(\delta \)-rays, were examined, and the time resolution was found to improve with the use of multiple counters, following closely Eq. (7). Detailed reports are available in [118, 131].

Finally, we performed pilot runs in 2015 and 2016 using the MEG II \(\mathrm {\mu ^+}\) beam and the one-fourth system of the pTC (consisting of 128 counters) installed in the COBRA spectrometer. The system was thoroughly tested from the hardware point of view: the geometrical consistency, the installation procedure, and the operation under beam. The laser calibration system was partially implemented and also tested. Data from Michel positrons were also taken with a prototype of the WDBs, under various trigger conditions.

The multi-counter time resolutions are evaluated by an ‘odd–even’ analysis. For a given set of hit counters, hits are alternately grouped into ‘odd’ (\(N_\mathrm {odd}\)) and ‘even’ (\(N_\mathrm {even}\)) by the order of the pixels traversed by the positron, the time difference being defined as (\(N_\mathrm {hit} = N_{\mathrm {even}} + N_{\mathrm {odd}}\))
\[ t_\mathrm {odd{\text {-}}even}(N_\mathrm {hit}) = \frac{1}{2}\left( \frac{1}{N_\mathrm {odd}}\sum _{i\,\in \,\mathrm {odd}} t_i - \frac{1}{N_\mathrm {even}}\sum _{j\,\in \,\mathrm {even}} t_j \right) . \]

The standard deviation of \(t_\mathrm {odd{\text {-}}even}(N_\mathrm {hit})\) is used as an estimator of the time resolution for \(N_\mathrm {hit}\) hits and is examined for 22 sets of counters. Figure 54 shows the result obtained in the 2016 pilot run. The total time resolution improves following Eq. (7) with \(\sigma ^\mathrm {single}_{t_{\mathrm {e}^{+}}}=93\) ps. At the mean \(\bar{N}_\mathrm {hit} = 9\), \(\sigma _{t_{\mathrm {e}^{+}}}(\bar{N}_\mathrm {hit} =9)=31\) ps was achieved.
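A toy Monte Carlo illustrates why the standard deviation of the odd–even difference tracks \(\sigma ^\mathrm {single}_{t_{\mathrm {e}^{+}}}/\sqrt{N_\mathrm {hit}}\). This is a sketch assuming Gaussian, uncorrelated single-counter times, not the actual analysis code:

```python
import math, random

def odd_even_diff(times):
    """Half-difference of the mean 'odd' and 'even' hit times."""
    odd, even = times[0::2], times[1::2]
    return 0.5 * (sum(odd) / len(odd) - sum(even) / len(even))

random.seed(1)
sigma_single = 93.0   # ps, single-counter resolution from the pilot run
n_hit = 9             # mean hit multiplicity

# generate many events, each with n_hit independent Gaussian hit times
diffs = [odd_even_diff([random.gauss(0.0, sigma_single) for _ in range(n_hit)])
         for _ in range(20000)]
mean = sum(diffs) / len(diffs)
rms = math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))
print(f"odd-even RMS: {rms:.0f} ps vs sigma/sqrt(N): "
      f"{sigma_single / math.sqrt(n_hit):.0f} ps")  # both ~31 ps
```

Both numbers come out near the 31 ps quoted for \(\bar{N}_\mathrm {hit}=9\).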

The total time resolution vs. the number of hits measured by the ‘odd–even’ analysis in the pilot run 2016. The points are the average of the 22 counter-sets weighted by the event fraction. The red curve is the best fit function of \(\sigma _{t_{\mathrm {e}^{+}}}(N_\mathrm {hit})=\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {single}/\sqrt{N_\mathrm {hit}} \oplus \sigma _{t_{\mathrm {e}^{+}}}^\mathrm {const}\)

5.7 Radiation hardness of SiPMs

Modest radiation hardness is considered a weak point of SiPMs: an increase of the dark current and a change of the gain are typical effects after substantial irradiation. The SiPMs in the pTC will be exposed to a high flux of Michel positrons during the experiment. The integrated fluence of Michel positrons during three years of running is estimated to be \(\sim 10^{11}\,\hbox {e}^{+}/\hbox {cm}^{2}\).

The PSI \(\mu \)SR group performed irradiation tests using Michel positrons, as shown in Fig. 55 [132]. SiPMs from Hamamatsu Photonics were irradiated with fluences up to \(2.5\times 10^{11}\,\hbox {e}^{+}/\hbox {cm}^{2}\), more than twice the MEG II expectation. They observed a significant increase of the dark current by a factor of six and a 15% gain decrease. Interestingly, the timing resolution is not significantly changed even at the highest fluence.

Results from the irradiation tests of a SiPM from Hamamatsu Photonics (S10362-33-050C) performed by the PSI \(\mu \)SR group. A significant increase of the dark current (top) and a 15% gain decrease (middle) are observed, while the timing resolution is not significantly changed (bottom). Courtesy of Dr. A. Stoykov, Paul Scherrer Institut

During the pilot run, we observed increases in the SiPM current. By extrapolating the observed increase, the dark current of each channel would reach \(\mathcal {O}(100\,\upmu \hbox {A})\) in the three years run. This is higher than the expectation from the study above. Further studies are necessary to assess the impact of the radiation damage on the timing performance. We plan to carry out irradiation tests of our SiPMs and counter modules using high intensity \(\beta \) sources and test beams such as BTF at LNF.

The SiPMs are also irradiated by neutrons and \(\gamma \)-rays in our experiment. The effect is discussed in detail in Sect. 6.2.2 for the SiPMs planned to be used for the LXe photon detector and it turns out not to influence the performance of SiPMs.

Another possible issue is the temperature stability of the SiPMs. The temperature coefficient of the breakdown voltage of the ASD-NUM3S-P-50-High-Gain is 26 mV/\(^{\circ }\hbox {C}\); the gain at an over-voltage of 2.5 V thus changes by 1% for a temperature change of 1 \(^{\circ }\)C. The temperature will be controlled and stabilised to within 1 \(^{\circ }\)C by the air-conditioning system of the detector hut and the cooling-water system on the mechanical support structure. Therefore, the temperature dependence of the SiPMs should not be an issue in our case.
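The 1%/\(^{\circ }\)C figure follows directly from the quoted numbers, using the standard first-order relation that the SiPM gain is proportional to the over-voltage (a simplification; the exact dependence is device specific):

```python
temp_coeff = 26e-3   # V per degC, breakdown-voltage temperature coefficient
over_voltage = 2.5   # V, operating over-voltage
dT = 1.0             # degC, temperature excursion

# a breakdown-voltage shift changes the over-voltage, hence the gain,
# to first order proportionally
gain_change = temp_coeff * dT / over_voltage
print(f"relative gain change: {gain_change * 100:.1f}% per degC")  # ~1%
```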

6 LXe photon detector

6.1 Upgrade concept

The liquid xenon (LXe) photon detector is a key ingredient for identifying the signal and suppressing the background in the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) search. The influence of the detector resolutions on the analysis sensitivity is approximately evaluated from Eq. (2). Comparing the energy resolution achieved in MEG (1.7%) with the one originally foreseen (1.2%), the physics reach of MEG was limited by the LXe detector by a factor of 2. It is, therefore, crucial to substantially improve its performance in MEG II.

The MEG LXe photon detector, shown in Fig. 56, was one of the world’s largest detectors based on LXe scintillation light with 900 l of LXe surrounded by 846 PMTs submerged in liquid to detect the scintillation light in the VUV range (\(\lambda = (175\pm 5)\,{\hbox {nm}}\)). The 2-inch PMT (Hamamatsu Photonics R9869) used in the detector is UV-sensitive with a photo-cathode of K–Cs–Sb and a synthetic quartz window. The quantum efficiency (QE) was about 16% for the LXe scintillation light at a LXe temperature of 165 K.

The photon entrance inner face was covered by 216 PMTs with a minimum spacing between adjacent PMTs. The photo-cathode of the PMT was, however, round-shaped with a diameter of 46 mm which was much smaller than the interval between adjacent PMTs of 62 mm. The performance of the MEG LXe photon detector was limited due to this non-uniform PMT coverage. Figure 57 shows the efficiency of scintillation light collection as a function of the depth of the first interaction for signal photons of 52.8 MeV. The collection efficiency strongly depended on the incident position. The non-uniform response was partly corrected for in the offline analysis, but it still deteriorated the energy and position resolutions due to event-by-event fluctuations of the shower shape, especially for shallow events.

Example of scintillation light distributions detected by the photo-sensors with (left) PMTs and (right) smaller photo-sensors (12 \(\times \) 12 \(\hbox {mm}^{2}\)) on the inner face, for the same MC event

The main concept of the upgrade of the LXe photon detector for MEG II is to reduce this non-uniform response by replacing the PMTs on the inner face with smaller photo-sensors. Figure 58 shows a comparison of how the same event would look for the two cases with the current PMTs and smaller photo-sensors (12 \(\times \) 12 \(\hbox {mm}^{2}\)) on the inner face. The imaging power is greatly improved with smaller photo-sensors. For example, two local energy deposits in the same shower are clearly separated in this event. It turns out that both the energy and position resolutions greatly improve especially for shallow events as shown in Sect. 6.6.

SiPMs are adopted as smaller photo-sensors for the inner face of the MEG II LXe photon detector. The motivation for choosing SiPM is discussed in detail in Sect. 6.2.

The PMTs which were used on the inner face of the MEG LXe photon detector are re-used on the other faces. Detailed MC studies show that the best use of those PMTs is achieved by modifying the layout of the PMTs on the lateral faces. Figure 59 illustrates the modified layout viewed in an r-z section. The inner face extends along z, outside the acceptance region by 10% on each side. The extended volume reduces the energy leakage for events near the lateral walls. The PMTs on the lateral faces are tilted such that all the photo-cathodes lie in the same plane. This configuration minimises the effect of leakage due to shower fluctuations for events near the lateral walls. The energy resolution is thus improved especially for those events.

MEG (left) and MEG II (right) layouts of the PMTs viewed in an r-z section

6.2 Development of VUV-sensitive MPPC

6.2.1 MPPC advantage

The MPPC® (Multi-Pixel Photon Counter), a photon-counting device produced by Hamamatsu Photonics, is a type of SiPM. The MPPC has many features well suited to the MEG II experiment. It is insensitive to magnetic fields and is sensitive to single photons, which enables an easier and more reliable calibration of the detector. Moreover, the finer read-out granularity of the scintillation light with MPPCs allows for a more precise reconstruction of shallow events. The smaller material budget in front of the LXe active region results in a 9% higher detection efficiency, as discussed in Sect. 6.6. The typical bias voltage is less than 100 V.

6.2.2 Requirements for the LXe detector MPPCs

There are several issues to be addressed concerning the detection of LXe scintillation light by MPPCs.

The first issue is the photon detection efficiency (PDE) for VUV light. There are two types of layer structure for a SiPM: p-silicon on an n-substrate (p-on-n) and n-on-p. In general, since the ionisation coefficient for electrons is higher than that for holes, the breakdown initiation probability for electrons is always higher than that for holes. Blue light is absorbed close to the SiPM surface, and in the p-on-n case electrons initiate the avalanche breakdown, which results in a higher sensitivity in the blue region. Our MPPC uses the p-on-n structure, which is suitable for detecting blue light. The PDE of a standard MPPC for VUV light is, however, nearly zero, since VUV photons cannot reach the sensitive layer through the protective coating layer made of epoxy resin or silicone rubber. Furthermore, the anti-reflection (AR) coating layer is not optimised for the refractive index of LXe at the scintillation wavelength.

The second issue is the MPPC size. The largest single MPPC currently available commercially is \(6 \times 6\,\hbox {mm}^{2}\), which is still too small to cover the inner face of the LXe photon detector with an affordable number of read-out channels. It is desirable to develop a large-area MPPC of \(10 \times 10\,\hbox {mm}^{2}\) or larger. However, a larger MPPC could suffer from a higher dark count rate, larger gain non-uniformity, and larger capacitance (longer tail in the waveform, larger noise, etc.) [133].

A large-area UV-sensitive MPPC has been developed in collaboration with Hamamatsu Photonics to be used in the upgraded LXe photon detector. We will describe its characteristics in the following sections.

6.2.3 Photon detection efficiency

Many prototypes optimised for VUV detection have been produced by Hamamatsu Photonics, featuring no protection coating, a thinner contact layer, or an optimised AR coating with different parameters.

An MPPC signal waveform (upper) and a PMT signal waveform (lower) for the same \(\alpha \)-event digitised at a sampling frequency of 700 MSPS

We succeeded in detecting the LXe scintillation light from \(\alpha \)-events by using one such prototype sample. Figure 60 shows signal waveforms from the MPPC sample (upper figure) and a UV-sensitive PMT (lower one) for the same \(\alpha \) event.

The number of detected photoelectrons for \(\alpha \)-events is calculated from the ratio of the observed charge to that obtained for a single photoelectron event. The PDE is then estimated from the ratio of the detected number of photoelectrons to the expected number of incoming scintillation photons from \(\alpha \)-events. This PDE still contains contributions from cross-talk, after-pulses, and the infrared component of the LXe scintillation light. The contribution from the infrared component is estimated to be \(\sim \)1% indirectly, by using the signal observed with a commercial MPPC (S10362-33-100C), which is supposed to be insensitive to the VUV component. The cross-talk + after-pulse components are estimated using a flashing LED set such that the MPPC detects less than 1 p.e. on average. The expected 1 p.e. probability (\(p_\mathrm {1~p.e.~expected}\)) is calculated from the Poisson distribution with the mean estimated from the observed probability of 0 p.e. events. We can estimate the cross-talk + after-pulse probability by comparing this with the measured probability of 1 p.e. events (\(p_\mathrm {1~p.e.~measured}\)) [134]. This method yields a cross-talk + after-pulse probability \((p_\mathrm {1~p.e.~expected}-p_\mathrm {1~p.e.~measured})/p_\mathrm {1~p.e.~measured}\) of between 10 and 50%, depending on the over-voltage.
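A minimal sketch of the Poisson method described above. The input probabilities here are hypothetical illustration values, not measured ones:

```python
import math

def ct_ap_probability(p0_obs, p1_obs):
    """Estimate the cross-talk + after-pulse probability from low-light
    LED data, following the Poisson argument in the text."""
    mu = -math.log(p0_obs)            # Poisson mean from the 0 p.e. fraction
    p1_expected = mu * math.exp(-mu)  # expected 1 p.e. fraction
    return (p1_expected - p1_obs) / p1_obs

# hypothetical example: 70% pedestal (0 p.e.) events, 20% measured 1 p.e. events
print(f"CT + AP probability: {ct_ap_probability(0.70, 0.20):.2f}")
```

With these illustration numbers the estimate lands at roughly 25%, inside the 10–50% range quoted in the text.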

Figure 61 shows the measured PDEs for four MPPC samples after correcting for the contributions from cross-talk and after-pulses. There is roughly a 30% uncertainty in the PDE value, estimated from the variation of the PDE measured in different setups. The results show a PDE in LXe higher than 15%, similar to the QE of the UV-sensitive PMT of the current detector (\(\sim \)16%). Since the sensor coverage on the inner face is increased by 50% using MPPCs, the total photoelectron statistics will increase.

6.2.4 Temperature dependence

Poly-silicon was used as quenching resistor in previous versions of MPPCs, but now metal resistors are more common. The resistivity of poly-silicon increases as the temperature decreases; for example, the resistance at LXe temperature is measured to be more than a factor of two higher than at room temperature. A metal quenching resistor, whose temperature coefficient is 1/5 that of a poly-silicon resistor, is more suitable in our application: it keeps the quenching resistance low and thereby avoids a long fall time in the MPPC signal.

The breakdown voltage of MPPCs is known to have a relatively large temperature coefficient (56 mV \(^{\circ }\hbox {C}^{-1}\)) and the gain and PDE can therefore easily shift depending on the temperature, influencing the stability of the detector performances.

The LXe temperature stability of the MEG LXe photon detector has been measured to be better than 0.15 K (RMS), most likely dominated by the precision of the temperature measuring device. The resulting fluctuation of the MPPC gain at an over-voltage of around 7 V is expected to be smaller than 0.1% (RMS). The PDE at around 7 V over-voltage is already saturated, and no fluctuation is expected from temperature variations. The voltage dependence of the cross-talk and after-pulse probability of the MPPC should be smaller than 30%/V, which corresponds to a 0.23% fluctuation. These fluctuations are smaller than the expected energy resolution of the MEG II LXe photon detector described in Sect. 6.6.

6.2.5 Radiation hardness

Radiation produces defects in the silicon bulk or at the Si/SiO\(_{2}\) interface of SiPMs. As a result, some parameters of SiPMs such as the breakdown voltage, leakage current, dark count rate, gain, and PDE may change after irradiation. There have been many studies on the radiation hardness of SiPMs irradiated by photons, neutrons, protons, or electrons. These studies show the following.

An increased dark count rate was observed at more than \(10^{8}\) n/\(\hbox {cm}^{2}\), and loss of single p.e. detection capability was observed at more than \(10^{10}\,{\hbox {n}/\hbox {cm}^{2}}\) [136]. From the neutron flux measured in the MEG experimental area, the total neutron fluence is estimated to be less than \(1.6\times 10^{8}\,{\hbox {n}/\hbox {cm}^{2}}\) in MEG II. The dark count rate might become a factor of 2–3 higher, but since we operate the MPPCs at low temperature, this will not be problematic.

Increased leakage current was observed with a photon irradiation of 200 Gy [137], while the photon dose in the MEG II is estimated to be 0.6 Gy.

The radiation damage by photons or neutrons should therefore not be an issue for the MPPCs in MEG II.

6.2.6 Linearity

SiPMs show a non-linear response when the number of incident photons is comparable to or larger than the number of pixels of the device. Operation is optimal when the number of incident photons is much smaller than the number of pixels and the light is not localised. Figure 63 (top) shows the measured response functions for \({1 \times 1}\, \hbox {mm}^{2}\) SiPMs with different total numbers of pixels, illuminated by a 40 ps laser pulse [138]. For the MEG II LXe detector, the expected number of photoelectrons reaches up to 12,000 p.e. on a \(12 \times 12\, \hbox {mm}^{2}\) sensor area for very shallow signal events, as shown in Fig. 63 (bottom), which is only 20% of the total number of 57,600 pixels. Considering also that some of the fired pixels recover during the emission time of the scintillation light, the expected non-linearity is small and can be corrected for by a careful calibration.
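The 20% pixel occupancy quoted above can be translated into an expected non-linearity with the standard single-shot SiPM saturation formula. The formula is not taken from the text and neglects pixel recovery during the scintillation emission, so it gives an upper bound on the instantaneous non-linearity:

```python
import math

def fired_pixels(n_pe_ideal, n_pixels):
    """Standard single-shot SiPM saturation: expected number of fired
    pixels for n_pe_ideal impinging photoelectrons (no pixel recovery)."""
    return n_pixels * (1.0 - math.exp(-n_pe_ideal / n_pixels))

n_pe = 12000.0    # expected p.e. for very shallow signal events
n_pix = 57600.0   # total pixels of a 12 x 12 mm^2 MPPC
fired = fired_pixels(n_pe, n_pix)
print(f"non-linearity: {(1 - fired / n_pe) * 100:.0f}%")  # ~10% upper bound
```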

(Top) Response functions for SiPMs with different total pixel numbers, measured with 40 ps laser pulses [138]. (Bottom) The number of photoelectrons expected from a \(12 \times 12\, \hbox {mm}^{2}\) MPPC versus conversion depth in the MEG II MC simulation

6.2.7 Large area MPPC

The current largest MPPC (\(6 \times 6\, \hbox {mm}^{2}\)) is still too small for the MEG II LXe photon detector, and we need at least \(10 \times 10\, \hbox {mm}^{2}\) to replace the PMTs. For a larger size sensor, we have to pay attention to a possible increase in the dark count rate, an increase of the sensor capacitance, and gain non-uniformity over the sensor area.

The increase of the dark count rate is not an issue in MEG II thanks to the LXe temperature. To reduce the sensor capacitance, the large area of \(12 \times 12 \, \hbox {mm}^{2}\) is formed by connecting four smaller MPPCs (\({6 \times 6} \, \hbox {mm}^{2}\) each) in series. In this configuration, the decay constant of the signal waveform decreases from 130 ns for a single sensor to 40–50 ns for four sensors in series. To equalise the gain in the large-area sensor, four MPPCs with similar breakdown voltages are selected.
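The reduction of the decay constant is consistent with naive capacitance scaling: four chips in series have an effective capacitance of \(C_\mathrm {chip}/4\), and with a fixed load the decay constant scales with the capacitance. This back-of-the-envelope check is an assumption of ours, not a calculation from the text; parasitics and the load plausibly account for the measured 40–50 ns exceeding the naive estimate.

```python
# naive scaling: tau is proportional to the sensor capacitance,
# and a series connection of four equal chips gives C_chip / 4
tau_single = 130.0            # ns, decay constant of one 6 x 6 mm^2 chip
tau_series = tau_single / 4   # naive estimate for four chips in series
print(f"naive series decay constant: {tau_series:.1f} ns")  # 32.5 ns
```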

Instead of a simple series connection, each sensor chip is decoupled with a capacitor to enable the bias voltage to be supplied via a parallel connection. In this way, we can still extract signals from the series connection, and the common bias voltage \(\sim 55\) V can be supplied to the four sensor chips.

6.3 Detector design

6.3.1 Design of sensor package and assembly

Figure 64 shows the design of the UV-enhanced MPPC package used for the MEG II LXe photon detector. Four sensor chips with a total active area of \({12 \times 12}\, \hbox {mm}^{2}\) are glued on a ceramic base of \({15 \times 15}\, \hbox {mm}^{2}\). Ceramic is chosen as the base material because its thermal expansion coefficient is close to that of silicon at LXe temperatures.

The sensor active area is covered with a thin, high quality, VUV-transparent quartz window for protection. The window is not hermetic; there is a gap between the sensor and the window into which LXe penetrates. Figure 65 shows the transmission efficiency of different window materials as a function of wavelength [139], showing that the transmittance of the synthetic silica used in our MPPC is \({\sim 75\%}\) for the LXe scintillation light (175 nm). The reflection loss is small since both sides of the quartz window touch LXe, whose refractive index is close to that of the quartz window (\(n_\mathrm {LXe}=1.64\), \(n_\mathrm {quartz} = 1.60\)).
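The smallness of the reflection loss can be checked with the normal-incidence Fresnel formula; this is a simplification we add here (real incidence angles vary), using the refractive indices quoted in the text:

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel reflectance at an interface n1 -> n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# LXe (n ~ 1.64) to the quartz window (n ~ 1.60): per-surface loss
r = fresnel_reflectance(1.64, 1.60)
print(f"per-surface reflectance: {r * 100:.3f}%")  # ~0.015%
```

The loss per surface is at the \(10^{-4}\) level, negligible compared to the \({\sim 75\%}\) bulk transmittance.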

Transmission efficiency as a function of wavelength for high quality VUV-transparent quartz [139]

The MPPCs are mounted on a PCB strip as shown in Fig. 66. Each PCB strip holds 22 MPPCs, and two PCBs are mounted in a line along the z-direction, with 93 lines (186 strips) covering the \(\phi \)-direction on the inner wall of the detector cryostat, as shown in Fig. 67. The number of MPPCs totals 4092. Each MPPC package has eight electrode pins (an anode and a cathode from each sensor chip), which are plugged into the corresponding sockets on the PCB. This mounting scheme allows easy replacement of an MPPC module if necessary. Additional capacitors and resistors are implemented on the PCB to realise the series signal connection and the parallel bias connection described in Sect. 6.2.7.

The signals from the MPPCs are transmitted on the signal lines of the PCB which are designed to be well shielded from both outside and the adjacent channels and have a 50 \(\varOmega \) impedance. Similar PCBs are used in the feed-throughs of the cryostat as described in Sect. 6.3.3.

It is important to precisely align the PCB strips on the inner wall of the detector cryostat and to minimise the gap between the strips and the wall since LXe in this gap deteriorates the photon detection efficiency and causes an undesirable low energy tail in the energy response function of the detector. Figure 68 shows the inside of the LXe photon detector after the MPPCs and PMTs are mounted.

The inside of the LXe photon detector after the MPPCs and PMTs are assembled

6.3.2 Design of PMT support structures

Figure 69 shows the 3D CAD design of the PMT support structure. There is no support structure on the inner side; only the outer part has screw holes to an arch-shaped support structure fixed by bolts. Joint brackets are used to fix two adjacent side slabs. The side and outer faces of the PMT support structure are re-used from the MEG LXe photon detector, while the top and bottom panels are modified to fit the larger number of PMTs, 73 instead of 54. In total, 668 PMTs are installed on the five faces other than the inner one.

6.3.3 Signal transmission

The transmission of 4092 MPPC signals to the DAQ electronics without introducing noise or distortion is challenging. We have to pay attention to pickup noise, cross-talk, and limited space in the cryostat as well as the feed-throughs etc. In order to overcome such issues, we have developed a multi-layer PCB with coaxial-like signal line structure. It is used for both the PCBs for MPPC mounting and the vacuum feed-through of the cryostat.

A cross-sectional view of the PCB schematic drawing in which a signal line is shielded by surrounding ground lines and ground layers. The total thickness of the PCB board is 1.6 mm

As described in Sect. 6.3.1, 22 MPPCs are mounted on a PCB strip, and signal lines embedded in the strip transmit the signals to one end. The total length of the signal lines is about 35 cm, and the width of the PCB is 15 mm. The MPPCs are plugged into socket pins on the PCB, and 22 MMCX (micro-miniature coaxial) connectors are used at the end of the signal lines. The signal lines on the PCB strip are connected to (real) thin coaxial cables by means of connectors at the edge of the PCBs. The signals are then transmitted to the feed-throughs over these thin coaxial cables, with lengths of 2.5–4.9 m depending on the \(\phi \) position. The coaxial cables (RG178-FEP) are produced by JYEBAO [140]. An MMCX connector is assembled on one end, and the other end is directly soldered on the feed-through PCB.

Figure 70 shows the layer structure of the PCB used as our feed-through PCB. Each signal line is surrounded by separate ground patterns to minimise cross-talk and to shield it from the outside. To avoid ground loops, the ground patterns of different signal lines are kept separated. In total, six layers are used (two signal layers and four ground layers). The impedance of the buried micro-strip lines is adjusted to 50 \(\varOmega \). The MEG LXe photon detector had in total 10 DN160CF flanges for the signal and HV cables of 846 PMTs. Since the number of readout channels has increased, more feed-through ports are necessary. A PCB-type feed-through, shown in Fig. 71 and similar to the PCB for MPPC mounting, has been developed; it allows high-density signal transmission through vacuum walls in a low-noise environment. On both sides of a PCB, 72 cables are directly soldered, and six PCBs are glued with Stycast 2850 FT + Catalyst 24 LV into a DN160CF flange. In total, 10 DN160CF flanges are used for MPPC signals (up to 4320 channels) and 2 flanges for PMT signals, while 4 flanges are used for PMT HV cables. The signals from the feed-throughs are transmitted to the readout electronics via 10 m long coaxial cables.

6.3.4 Read-out electronics

Both the PMT and MPPC signals are read out by WaveDREAM boards (WDBs). Amplifiers with switchable gain settings from 0.5 to 100 are mounted on the boards (see Sect. 8.2 for details). The gain can be switched at any time: the high-gain mode is used to detect single photo-electrons for the calibration of the MPPCs, while the low-gain mode is used to take physics data, where a large dynamic range is needed.

No additional amplifier is installed between the MPPCs and the WDBs. The bias voltage for the MPPCs, typically 50 V, is supplied by the WDBs through the signal cables.

6.3.5 Cryogenics

The MEG LXe cryostat is re-used for the MEG II LXe photon detector. To compensate for the increased external heat inflow due to the \({\sim 4000}\) extra signal cables for the MPPCs, the cooling power is increased by adding another Gifford–McMahon (GM) refrigerator, model AL300 produced by CRYOMECH [141]. The new refrigerator provides more than 400 W of cooling power, which should be sufficient to cool the MEG II LXe photon detector.

Mean number of photoelectrons vs. \(\varDelta \phi \) for X-ray events, fitted with a rectangular distribution smeared with error functions at the edges. The dashed lines show the boundaries of the neighbouring SiPMs

6.4 Calibration and monitoring

The LXe detector requires careful calibration and monitoring of the energy scale over its full energy range. Several methods are needed; they have already been introduced and commissioned in MEG and will be inherited by MEG II with some modifications to match the upgrade. They are listed in Table 7 and summarised in the following (see [1] for more details):

1.

The behaviour of the LXe photon detector is checked in the low-energy region using 4.4 MeV \(\gamma \)-rays from an AmBe source, placed in front of the inner face, and 5.5 MeV \(\alpha \)-particles from \(^{241}\)Am sources deposited on thin wires, mounted inside the active volume of the detector. The \(\alpha \)-signals are also used to evaluate and monitor in-situ the PMT quantum efficiencies (QEs) and to measure the Xe optical properties on a daily basis. In addition, 9.0 MeV \(\gamma \)-rays from capture by \(^{58}\)Ni of thermalised neutrons produced by a neutron generator are also available. This is the only method which allows checking the response of the LXe photon detector both with and without the particle flux associated with the muon beam and/or the other beams.

2.

The performance of the LXe photon detector in the intermediate-energy region is measured two or three times per week using a Cockcroft–Walton accelerator, by accelerating protons, in the energy range 400–1000 keV, onto a Li\(_2\)B\(_4\)O\(_7\) target. \(\gamma \)-rays of 17.6 MeV energy from \(^7 \mathrm {Li} (\mathrm {p}, \gamma ) ^8 \mathrm {Be}\) are used to monitor the energy scale, resolution and uniformity of the detector, while time-coincident 4.4 and 11.6 MeV \(\gamma \)-rays from \(^{11} \mathrm {B} (\mathrm {p}, \gamma \gamma ) ^{12} \mathrm {C}\) are used to inter-calibrate the relative timing of the LXe photon detector with the pTC detector.

The RMD can be used as well for calibration purposes with dedicated triggers. In particular the selection of the \(\mathrm {e}^+\gamma \) pair represents a strong quality check of the complete apparatus and a straightforward way to extract the global time resolution (the resolution of the timing difference between the positron and the photon) and the relative offset.

6.5 Alignment

Precise relative alignment of the photon detector and the positron magnetic spectrometer is important to ensure that the angular acceptance criteria for \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) signal events are not compromised. For example, a 5 mm error in the measured position of the photon in the LXe photon detector would result in a signal event possibly being missed because it would not be consistent with being emitted opposite to the direction of the positron. The relative alignment of the LXe photon detector and the spectrometer is implemented using optical survey techniques. For the LXe photon detector, the survey is complicated by the fact that the photo-sensors (SiPMs) are not visible once the LXe photon detector is closed and that their positions relative to the external survey markers change due to thermal contraction and buoyant forces as the LXe photon detector is cooled and filled with liquid xenon.

6.5.1 The X-ray alignment system for the LXe photon detector

The position of each SiPM will be measured with the novel technique of X-ray imaging of each sensor. The technique uses a well collimated and precisely aligned X-ray beam in the radial direction originating from the axis of the COBRA magnet (at \(x =y =0\) in the MEG coordinate system) at precisely known axial (z) and azimuthal (\(\phi \)) coordinates. The X-rays are collimated to produce a ribbon-like beam, narrow (\({\approx 10}\)% of the dimension of a SiPM at its face) in one dimension (\(\phi \) or z). The energy of the X-rays is chosen such that they penetrate the COBRA and LXe cryostats with significant probability, yet interact within \(\approx 1\,\mathrm{mm}\) of liquid xenon, primarily by photo-absorption. Scintillation light produced by the photo-electrons in the liquid xenon is detected by the SiPM directly in front of the interaction. The z-coordinate of each SiPM is deduced by orienting the narrow (1.5 mrad) beam dimension in the axial direction and then scanning it in that direction. The axial extent of a given SiPM is given by the axial extent of the X-ray beam positions for which light is detected in that element. The \(\phi \)-coordinate is similarly determined by rotating the collimator so the beam is narrow in the azimuthal direction and scanning in azimuth.

The X-rays are produced by the decay of a \(\mathrm {^{57}Co}\) source, which emits X-ray lines at 122 keV (\({\approx 80}\)%) and 136 keV (\({\approx 10}\)%). They penetrate the COBRA magnet and the front of the LXe cryostat with \({\approx 30}\)% probability. We use a commercial point source with an activity of \(\approx 3\times 10^{10}\hbox { Bq}\) and collimate the beam to \({1.5 \times 50}\,\hbox {mrad}^{2}\) with a brass collimator. The z-coordinate of the origin of the beam and its \(\phi \)-direction are set using precise linear and rotary translation stages. The signal induced in the SiPM is about 30% of that induced in a single SiPM by a typical shower of a \(\sim 53\) MeV photon from \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) events. Data are collected by triggering on the signal detected in a limited number of SiPMs in the region to which the X-ray beam points.

The expected performance is studied in a Geant4 simulation of the X-ray beam and the MEG II detector. X-rays are generated in the beam solid angle and propagated through the COBRA cryostat into the LXe. Scintillation light is produced from the electron generated by the X-ray interaction, and the SiPM response is simulated. Figure 72 shows the average number of detected photoelectrons per interaction in a SiPM as a function of the difference \(\varDelta \phi \) between the \(\phi \)-coordinate of the beam and the \(\phi \)-coordinate of the SiPM centre. Each bin contains \({\approx 90}\) detected X-rays, corresponding to an exposure time of \(\sim 2.5\hbox { s}\) per position. An approximate estimate of the precision with which the SiPM centre \(\phi \)-coordinate can be measured is obtained by fitting the distribution with a rectangular function smeared with error functions; the statistical uncertainty is \(\sigma _{\overline{\varDelta \phi }} \simeq 0.1\) mrad. A similar precision is obtained by fitting the distribution with a Gaussian.
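The edge-smeared rectangle fit described above can be sketched as follows. This is a minimal illustration on synthetic scan data, not the actual analysis code; the SiPM width, smearing, noise level and true centre used below are all assumed values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def smeared_box(dphi, amp, centre, width, sigma):
    """Rectangular response of width `width` centred at `centre`,
    smeared at both edges by error functions of scale `sigma`."""
    lo = (dphi - centre + width / 2) / (np.sqrt(2) * sigma)
    hi = (dphi - centre - width / 2) / (np.sqrt(2) * sigma)
    return 0.5 * amp * (erf(lo) - erf(hi))

# Synthetic scan: assumed true centre at 1.0 mrad, a 12 mrad wide SiPM,
# 1.5 mrad edge smearing, Gaussian noise on the mean photoelectron count.
rng = np.random.default_rng(1)
dphi = np.linspace(-15.0, 15.0, 61)              # beam positions [mrad]
npe = smeared_box(dphi, 100.0, 1.0, 12.0, 1.5)
npe = npe + rng.normal(0.0, 2.0, dphi.size)

popt, pcov = curve_fit(smeared_box, dphi, npe, p0=[90.0, 0.0, 10.0, 1.0])
centre, centre_err = popt[1], np.sqrt(pcov[1, 1])
```

With a dense scan and small noise, the fitted centre reproduces the true value well below the SiPM size, which is the idea behind the quoted \(\sim 0.1\) mrad statistical precision.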

Systematic uncertainties in the position determination will be due to the uncertainty in our knowledge of the direction and origin of the X-ray beam. The position and angle alignment of the collimator is made by an optical survey to a precision of \(< 100\,\upmu \text {m}\) and 0.2 mrad. Variations in the beam direction as it moves along the translation stage (of the order of 0.5 mrad from the device specifications and our measurements) will be monitored with a laser attached to the translation stage and projected onto a quadrant photodiode, as well as with a spirit level on the translation stage. In addition, a cross-check of the optical survey of the cryostat and the X-ray beam is made by mounting small LYSO crystal scintillators and thin lead absorbers at well-surveyed positions just in front of the LXe cryostat. The X-ray beam should be detected in the LYSO detectors at the calculated X-ray beam \(\phi \)- and z-coordinates, and the signal in the LXe should be shadowed at the calculated X-ray beam \(\phi \)- and z-coordinates of the thin lead absorbers.

Position resolution in the horizontal (top) and vertical (bottom) directions as a function of the first conversion depth. The resolutions in MEG are shown with red markers, and those in MEG II are shown with blue markers

6.6 Expected performance

The expected performance of the upgraded LXe photon detector is evaluated using an MC simulation.

6.6.1 Simulation model

A full MC simulation code based on Geant4 was developed to compare the performance of the MEG and MEG II designs. Scintillation photon propagation is simulated, including reflection on the MPPC surface, which is modelled using the complex refractive index of a pure silicon crystal; the reflectance is typically about 60%. The simulation records the index number of the hit pixel and the arrival time of each scintillation photon, which are used to form the avalanche distribution in each MPPC. Dark noise, optical cross-talk, after-pulsing, saturation and recovery are modelled on the basis of real measurements and incorporated in the simulation. The waveform of the MPPC is simulated by convolving the single photo-electron pulse with the time distribution of avalanches. Simulated random electronics noise is added, assuming the same noise level as for the MEG read-out electronics.

The event reconstruction analyses are basically the same as those for the MEG detector, while parameters such as the waveform integration window and the corrections for light collection efficiency as a function of the conversion position are optimised for the new design. The non-linear response of the MPPC due to pixel saturation (see Fig. 63), resulting in a non-linear energy response of the detector, is taken into account. However, the effect on the energy reconstruction is negligible because the fraction of the total number of photoelectrons observed by each MPPC is small.
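The pixel-saturation effect can be illustrated with the standard SiPM response model, in which the expected number of fired pixels saturates exponentially with the number of impinging photoelectron-equivalents. The pixel count below is purely illustrative, and the model ignores cross-talk and pixel recovery; it is a sketch of the effect, not the correction actually applied in the reconstruction.

```python
import numpy as np

def fired_pixels(n_pe, n_pix):
    """Expected number of fired pixels for n_pe photoelectron-equivalents
    spread uniformly over n_pix pixels (standard saturation model,
    neglecting cross-talk and recovery)."""
    return n_pix * (1.0 - np.exp(-float(n_pe) / n_pix))

# With an assumed pixel count of 5.76e4 per sensor, a signal occupying
# only a small fraction of the pixels stays within ~1% of linearity,
# which is why the effect on the energy reconstruction is negligible.
n_pix = 57600
nonlinearity = 1.0 - fired_pixels(1000, n_pix) / 1000.0   # below 1%
```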

6.6.2 Simulation results of energy and time resolution

Figure 73 shows the position resolutions for signal photons as a function of the reconstructed conversion depth (w). In MEG, the position resolution is worse at shallow depths than at larger depths because of the PMT size. The position resolution in the shallow part is much better in MEG II due to the smaller size of the photo-sensors.

The energy resolution at shallow depths is also much better with the MEG II design than in MEG, as shown by the probability density function (PDF) for \(E_{\mathrm {\gamma }}=52.83\,\text {MeV}\) photons in Fig. 74, mainly due to a more uniform photon collection efficiency. The low-energy tail is smaller because of the lower energy leakage at the acceptance edge with the improved layout of the lateral PMTs. The resolution is also better at larger depths because of the modified angle of the lateral PMTs.

The measured energy resolution of MEG (1.7% for \(w>{2}{\hbox { cm}}\)) was worse than that in the simulation (1.0% for \(w>{2}{\hbox { cm}}\)). The reason is not fully understood; the source of the difference could be related to the behaviour of the PMTs (e.g. gain stability, angular dependence) or to the optical properties of liquid xenon (e.g. the effect of convection). In the former case, the difference should become smaller in the upgraded configuration; in the latter case, it could remain. Figure 75 shows the energy response under different assumptions:

1.

the additional fluctuation completely vanishes in MEG II,

2.

a part of the fluctuation remains, corresponding to a 1.2% resolution in MEG (the resolution achieved with the MEG LXe large prototype detector),

3.

the full fluctuation remains, corresponding to the MEG resolution of 1.7%.

We use assumption 2 for the sensitivity calculation in Sect. 9.

The time resolution of the LXe photon detector \(\sigma _{t_{\mathrm {\gamma }}}\) can be separated into six components: the transit time spread (TTS) of the photo-sensors, the statistical fluctuation of the scintillation photons, the timing jitter of the read-out electronics, the electronics noise, the resolution of the photon conversion point, and the finite size and fluctuation of the energy deposits in the LXe. Most of these are common to MEG and MEG II, but the effects of the TTS and of the electronics noise differ because of the different photo-sensors. The effect of the TTS is negligible because its contribution scales as the inverse square root of the number of photoelectrons, and the light output of liquid xenon is large. The effect of the electronics noise is larger in MEG II than in MEG because the leading edge of an MPPC pulse for liquid xenon scintillation is slower than that of a PMT pulse. To estimate the effect, the time resolution of the upgraded detector for signal photons is determined in the simulation. The evaluated time resolution with preliminary waveform and reconstruction algorithms is \(\sigma _{t_{\mathrm {\gamma }}}\simeq {50}\hbox { ps}\), assuming an electronics noise level up to 1 mV. The main improvements come from the better time-of-flight estimate, derived from the better position reconstruction, and from the higher photon statistics. Since parameters such as the rise time of the waveform and the noise components may not be correctly modelled in the simulation, the time resolution in the worst case might still be at the MEG level; hence a conservative estimate is \(\sigma _{t_{\mathrm {\gamma }}}\sim \) 50 to 70 ps.
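Since the six contributions are, to a good approximation, independent, they combine in quadrature. The sketch below illustrates the combination; the per-component values are purely assumed for illustration, as only the total resolution is quoted in the text.

```python
import math

def combine_in_quadrature(*sigmas_ps):
    """Total resolution from independent contributions (in ps)."""
    return math.sqrt(sum(s * s for s in sigmas_ps))

# Assumed illustrative contributions [ps], e.g. photon statistics,
# electronics noise, conversion-point resolution, deposit fluctuations:
sigma_total = combine_in_quadrature(30.0, 25.0, 20.0, 15.0)   # ~46 ps
```

The quadrature sum also makes clear why the largest single contribution dominates: halving a subdominant term changes the total only marginally.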

6.6.3 High intensity

The higher background photon rate due to the higher muon intensity in MEG II should not be a problem for the photo-sensor operation. On the other hand, the background rate in the analysis photon energy region would be increased due to pile-up. In the MEG analysis, the energies of pile-up photons are unfolded using the waveform and light distribution on the inner face.

In 2011 we took data with the MEG LXe detector at different beam intensities: 1.0, 3.0, and \(8.0\times 10^{7}\,\upmu ^{+}\)/s. Figure 76 shows the photon spectra normalised to the number of events from 48 to 58 MeV; the scaling factors are consistent with the muon stopping rates on the target. The spectra are almost identical in the analysis region after subtracting the energies of pile-up photons. Since the same analysis can also be used for the upgraded MEG II detector, a higher beam rate is not expected to cause an additional background rate due to pile-up.

Reconstructed energy spectra obtained at different beam intensities: a before unfolding pile-up photons, b after unfolding and subtracting the energy of pile-up photons. Blue, black and red lines show the spectra at muon stopping rates of 1.0, 3.0, and \(8\times 10^{7}\,\mu ^{+}\)/s, respectively. The spectra are normalised by the number of events in the range 48–58 MeV; the scaling factors are consistent with the muon stopping rates on the target. The difference in the low-energy part below \(45\) MeV is due to different effective trigger thresholds; the difference in the high-energy part above \(60\) MeV is due to the different ratio between the photon backgrounds and the cosmic-ray background

7 Radiative Decay Counter

The Radiative Decay Counter (RDC) is an additional detector to be installed in MEG II. It is capable of identifying a fraction of the low-energy positrons from RMD decays with photon energies close to the kinematic limit, which are the dominant source of photons for the accidental coincidence background. This section describes the concept and the design of the detector, as well as the results of the pilot run and the expected performance.

7.1 Identification of the RMD photon background

As mentioned in Sect. 1, RMD and accidental coincidences are the backgrounds in the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) search. In the accidental background, which is dominant in MEG II, photons are produced either by RMD or by positron AIF. Figure 77 shows the fraction of background photons expected in MEG and MEG II from the different sources. The AIF background decreases in MEG II thanks to the reduced mass of the CDCH compared with the MEG drift chambers, and it can be decreased further by looking for a disappearing positron track in the analysis. The RMD photon background, on the other hand, does not change; it is therefore important to identify these events. According to simulations, the RDC can detect \({\sim 42}\)% of the RMD photon background events (\(E_{\mathrm {\gamma }}>48\,\text {MeV}\)), given by the product of the fraction of positrons going downstream (\({\sim 48}\)%) and the RDC positron detection efficiency (\({\sim 88}\)%, see Table 8), thus improving the sensitivity of the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) search by 15%.
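The quoted \({\sim 42}\)% tagging fraction is simply the product of the two factors above, which can be checked directly:

```python
# Fraction of RMD background photons (E_gamma > 48 MeV) whose positron
# is tagged by the downstream RDC, as quoted in the text.
f_downstream = 0.48   # fraction of RMD positrons going downstream
eff_rdc = 0.88        # RDC positron detection efficiency (Table 8)

f_tagged = f_downstream * eff_rdc   # ~0.42
```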

The RDC will be installed downstream of the \(\mathrm {\mu ^+}\) stopping target, as shown in Fig. 78. A fraction of the RMD events can be identified by tagging a low-energy positron in time coincidence with the detection of a high-energy photon in the LXe detector. This low-energy positron of 1–5 MeV (with \(E_{\mathrm {\gamma }}>48\,\text {MeV}\)) follows an almost helical trajectory of small radius around the B-field lines. Therefore, it can be seen by a small detector with a radius of only \(\sim \) 10 cm, placed on the beam axis. There is an option to install a detector also upstream, as described in Sect. 7.6.

7.2 Detector design

The red histogram in Fig. 79 shows the expected distribution of the time difference between RDC and the LXe photon detector for accidental background events (with photons from RMD or AIF), while the blue histogram is the distribution due to \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) signal events. The peak in the red histogram corresponds to the RMD events, while the flat region in both histograms corresponds to background Michel positrons. As the detector is placed on the beam-axis, there are many background Michel positrons (\({{\sim }10^{7}}\hbox {e}^{+}/\mathrm{s}\)). They can be distinguished from RMD positrons by measuring their energy since they typically have higher energies as shown in Fig. 80. Hence, the RDC consists of fast plastic scintillator bars (PS) for timing and a LYSO crystal calorimeter for energy measurements.

Expected energy distribution at the RDC for RMD events with \(E_{\mathrm {\gamma }}> 48\,\text {MeV}\) (red) and for the Michel events (blue)

Figure 81 shows a schematic view of the RDC detector: 12 plastic scintillator bars in the front detect the timing of the positrons, and 76 LYSO crystals behind are the calorimeter for energy measurement. In order to distinguish RMD positrons from Michel ones, both the PS and the LYSO calorimeter are finely segmented. Because the background rate is larger close to the beam axis, the width of the PS in the central region is 1 cm while it is 2 cm at the outer part. The size of each LYSO crystal is \(2 \times 2 \times 2 \hbox { cm}^{3}\).

Schematic view of the RDC. The horizontal long plates in front are the plastic scintillator bars, and the cubes behind are the LYSO crystals

The PS shown in Fig. 82 consists of plastic scintillators read out by SiPMs. The design of the PS is very similar to that of the pTC (Sect. 5). In order to have good timing resolution, the scintillators must have a high light yield and a short rise time. BC-418 from Saint-Gobain [119] was selected as it satisfies these requirements. The scintillation light is read out by SiPMs at both ends of each scintillator. SiPMs are compact and operate in high magnetic fields, and so are suitable for the RDC, which has many readout channels in a limited space. The MPPC S13360-3050PE from Hamamatsu Photonics [143] was selected as the SiPM for the PS because of its high gain and high photon detection efficiency. In order to detect as much scintillation light as possible, multiple SiPMs (two for the central part and three for the outer part) are attached to both ends of the scintillators. The SiPMs are connected in series on the readout printed circuit boards (PCBs) to reduce the number of readout channels and to shorten the rise time of the signal thanks to the reduced capacitance. They are glued to the scintillators with optical cement. Each scintillator is wrapped with a \(65\,\upmu \text {m}\) thick reflective sheet (ESR from 3M), to increase the light yield and to provide optical separation, and with black Tedlar sheets for light shielding.

Plastic scintillator bars of the RDC. SiPMs are connected to the scintillator bars at both ends

The calorimeter is made of 76 LYSO crystals (Shanghai Institute of Ceramics). LYSO crystals have a high light yield (\(3 \times 10^{4}\) photons/MeV) and a short decay time constant (42 ns). These characteristics are suitable for the measurement of the positron energy in a high-rate environment. LYSO contains the radioisotope \(^{176}\)Lu, which decays to \(^{176}\)Hf with emission of a \(\beta ^-\) (with end-point energy of 596 keV and half-life of \(3.78\times 10^{10}\) years), followed by a cascade of 307, 202 and 88 keV \(\gamma \)-rays. As described in Sect. 7.4, this intrinsic radioactivity can be used for an energy calibration. The decay rate per crystal is measured to be small (\(\sim 2\hbox { kHz}\)), and therefore does not affect the detection of positrons from RMD. Each LYSO crystal is connected to one SiPM on the downstream side (see Fig. 83). A SiPM with a small pixel size of \(25\,\upmu \text {m}\) (S12572-025 from Hamamatsu Photonics [144]) was selected as it has good linearity for high-intensity incident scintillation light. The SiPM is held in spring-loaded contact with the crystal, using optical grease, instead of being glued; it is therefore possible to replace the SiPM or the crystal.

7.3 Tests and construction in the laboratory

The characteristics of each SiPM for the PS were measured before construction. The breakdown voltage of each SiPM was obtained from the measured current–voltage response curve. SiPMs with similar breakdown voltages are grouped together and connected in series. After construction of the PS, the timing resolution of each counter was measured to be better than 90 ps using a \(^{90}\)Sr source.
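The grouping step can be sketched as follows: sort the sensors by measured breakdown voltage and take consecutive runs, so that each series chain contains closely matched voltages. The sensor names and voltage values below are hypothetical.

```python
def group_by_breakdown(vbd, group_size):
    """Sort sensors by breakdown voltage and form consecutive groups
    of `group_size`, so that series-connected SiPMs are closely matched.
    `vbd` maps sensor id -> measured breakdown voltage [V]."""
    ordered = sorted(vbd, key=vbd.get)
    return [ordered[i:i + group_size]
            for i in range(0, len(ordered), group_size)]

# Hypothetical measurements for six SiPMs, grouped in threes:
vbd = {"A": 53.1, "B": 52.7, "C": 53.0, "D": 52.8, "E": 53.2, "F": 52.6}
groups = group_by_breakdown(vbd, 3)   # [['F', 'B', 'D'], ['C', 'A', 'E']]
```

Matching breakdown voltages within a chain keeps the overvoltage, and hence gain and detection efficiency, similar for all sensors sharing one bias line.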

The LYSO crystals were also tested individually. We measured the light yield and the energy resolution of all the crystals using a \(^{60}\)Co source. The energy resolution was measured to be \({\sim 6}\)% at \(E_{\mathrm {\gamma }}= 1\,\text {MeV}\) for all the crystals. In a high-rate environment, the energy resolution can be worsened by the “afterglow” effect of LYSO. Afterglow is a delayed light emission of the crystals with a very long time constant (typically a few hours). This effect was studied by exposing the crystals to a \(^{90}\)Sr source; the increase of the current due to afterglow was measured with the SiPMs attached to the crystals. According to this measurement, the expected increase of the current in the MEG II beam environment is at most \(\sim 10\, \upmu \hbox {A}\). The influence on the energy resolution is expected to be less than 1% at \(E_{\mathrm {\gamma }}= 1\,\text {MeV}\).

The support structures of the PS and of the LYSO calorimeter are constructed with non-magnetic materials such as aluminium. The front part of the PS is not covered with metal, in order to minimise the amount of material. In order to absorb the stress of the springs (\(\sim 2.5\) kg in total) with the minimum amount of material, a 3.3 mm Rohacell plate sandwiched with two CFRP (Carbon Fibre Reinforced Polymer) plates (0.2 mm each) is inserted between the PS and the LYSO calorimeter. In addition, a 0.1 mm thin aluminium plate is inserted for better light shielding. The back side of the crystals is covered by two Delrin® plates and one CFRP plate.

Figure 84 shows the RDC mounted on a moving arm system attached to the end-cap of the COBRA magnet. The RDC can be remotely moved away from the beam-axis when the calibration target for the LXe photon detector is inserted from the downstream side. The moving arm is controlled by water pistons made of plastic, which work in a magnetic field. The supporting mechanics are made of aluminium except for the titanium shaft, which works under heavy loads. The end-cap of COBRA separates the inner volume (filled with helium) from the outside. SiPM signals are transmitted through the end-cap by using feed-through PCBs attached to the end-cap. The design of the feed-through is essentially the same as used for the LXe photon detector (see Sect. 6.3.3).

7.4 Pilot run with a muon beam

The full detector system was tested in the \(\pi \)E5 beam line at PSI with a beam intensity of \({\sim } 1\times 10^{8}\,\upmu ^{+}\)/s.

The RDC was mounted at the downstream end of the COBRA magnet. For the detection of photons from RMD, a BGO detector consisting of 16 crystals (\(4.6 \times 4.6 \times 20\, \hbox {cm}^{3}\) each) was used as a substitute for the LXe photon detector. RMD events were acquired by requiring an energy deposit larger than \(\sim 35\) MeV in the BGO. After event selection to reject cosmic rays, \({\sim 15{,}000}\) events remained. The distribution of the time difference between the BGO hit and the PS hit is shown in Fig. 85. A clear peak corresponding to RMD events is observed.

Time difference between the RDC PS and BGO hits, from the beam test. The black (red) histogram shows the distribution before (after) applying a cut on the energy deposit in the LYSO calorimeter

Figure 86 shows the measured distribution of the energy loss in the LYSO calorimeter. Events in the higher-energy tail are mainly Michel positron background, while the low-energy part (\({<}5\) MeV) corresponds to RMD. As a demonstration, we applied an event selection rejecting events with an energy release in the calorimeter above \(4\) MeV. The red histogram in Fig. 85 shows the timing distribution after the LYSO calorimeter energy cut. The flat region, which corresponds to background, is reduced to \({\sim 1/10}\) by this cut. The peak region (i.e. RMD events) is also reduced, to \({\sim 1/3}\), because the energy threshold for the BGO trigger was low and therefore the energy of the RMD positron could be high.

7.5 Expected performance

The sensitivity of the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) search in MEG II including the RDC is calculated using the expected timing-difference distribution between the RDC and the LXe photon detector (see Fig. 79) and the expected energy distribution in the LYSO calorimeter (see Fig. 80); see Sect. 9 for details. In the MEG physics analysis [2, 145, 146], the likelihood depends on the number of events (signal and background) and on probability density functions (PDFs) of the energy, timing and relative angles of positron and photon. The RDC observables can be added to the likelihood analysis through the PDFs of the PS–LXe timing difference and of the LYSO calorimeter energy. Table 8 summarises the RDC performance assumed in the calculation. With the RDC, the sensitivity of MEG II is expected to improve by 15%.
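How the RDC observables discriminate can be illustrated with a toy version of the time-difference PDFs: for accidental-background events the RDC–LXe time difference shows an RMD peak at zero on a flat Michel floor, while for signal events the RDC hit is uncorrelated and the PDF is flat. This is a schematic sketch, not the MEG II likelihood; all parameter values below are assumed for illustration.

```python
import math

def bkg_time_pdf(dt, f_rmd=0.3, sigma=0.1, window=40.0):
    """Toy background PDF of the RDC-LXe time difference [ns]:
    a Gaussian RMD peak (fraction f_rmd) on a flat accidental floor
    over a coincidence window of `window` ns."""
    gauss = math.exp(-0.5 * (dt / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return f_rmd * gauss + (1.0 - f_rmd) / window

def sig_time_pdf(dt, window=40.0):
    """For signal events the RDC hit is uncorrelated: flat PDF."""
    return 1.0 / window

# Events with an RDC hit in time with the photon are background-like
# (likelihood ratio << 1); out-of-time hits carry little information:
r_peak = sig_time_pdf(0.0) / bkg_time_pdf(0.0)     # << 1
r_off = sig_time_pdf(10.0) / bkg_time_pdf(10.0)    # ~ 1 / (1 - f_rmd)
```

In the real analysis these ratios enter the per-event likelihood together with the LYSO energy PDF, down-weighting events whose RDC signature is RMD-like.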

Table 8

Performance of the RDC assumed in the sensitivity calculation. The RMD acceptance is the probability to detect RMD positrons going downstream, for \(E_{\mathrm {\gamma }}> 48\,\text {MeV}\). The RMD detection efficiency is the probability of detecting a positron falling within the geometrical acceptance. The accidental probability is the probability of observing a Michel positron in the RDC uncorrelated with the photon, at \(R_\mathrm {\mu ^+}= 7 \times 10^{7}\,\hbox {s}^{-1}\)

Parameter                   Value
LYSO energy threshold       30 keV
RMD detection efficiency    100%
LYSO energy resolution      8%
Time resolution             100 ps
Accidental probability      9%
RMD acceptance              88%

7.6 Further background reduction with an upstream RDC

Because half of the positrons from RMD go upstream, it is possible to further improve the sensitivity by adding an additional RDC in the beam line upstream of the muon stopping target, near the end of the COBRA magnet. The upstream RDC has to be very different from the downstream one, as it must be placed in the beam path. First, the material thickness must be small enough to minimise the impact on the beam, which excludes the use of a calorimeter. Second, the detector must be able to distinguish RMD positrons from beam muons. This is possible with a fast-response, finely segmented detector.

A possible candidate is a layer of scintillating fibres with SiPM readout. The fibres can be bundled at both ends to reduce the number of readout channels. A candidate fibre is BCF-12 (Saint-Gobain [92]), a double-clad, square-shaped fibre \(250\,\upmu \text {m}\) wide. With this thickness, the effect on the muon beam optics is expected to be negligibly small. However, radiation damage to the fibres and pile-up of beam-muon signals (SiPM after-pulses increase the pile-up probability) may affect the detector performance.

Another candidate is a synthetic diamond detector. Diamond detectors have fast signals and can be manufactured in thin layers. They are also known to be radiation hard. The drawback is their small signals, which require low-noise, high-gain amplifiers.

The estimated improvement of the sensitivity with the upstream RDC is 10% when the detection efficiency is 100%.

8 Trigger and DAQ

This section describes the innovative integrated trigger and data acquisition system designed for the MEG II detector. After a description of the main requirements, the design of the circuits and their interplay are presented. We conclude with the latest results from the research and development phase.

8.1 Requirements

The MEG II sensitivity goal requires a substantial redesign of the detector and read-out electronics to deal with a factor of two increase in muon stopping rate with respect to MEG. As a consequence, many of the PMTs of the LXe and timing counter detectors were replaced with SiPMs and MPPCs; similarly, the new CDCH design requires more read-out channels than the MEG drift chambers. In total, this has led to an almost threefold increase in read-out channels with respect to MEG. Efficient offline pile-up reconstruction and rejection requires the availability of full waveform information; thus the DAQ waveform digitiser has to provide state-of-the-art time and charge resolution and a sampling speed in the GSPS range.

In addition, SiPMs have a lower gain than PMTs and require electronic signal amplification. Using SiPMs in LXe prevents us from placing preamplifiers directly next to the photo-sensors because of cooling problems; it is therefore mandatory that the new electronics contains flexible amplification stages for small signals (single photo-electrons for calibration) as well as large signals (\(\gamma \)-showers).

As shown in Fig. 87, the detector signals in MEG were actively split and then sent to the dedicated VME-based trigger and DAQ systems; the limited space for the electronics in the experimental area prevents us from adopting such a scheme with the increased number of channels expected in MEG II.

Comparison of the old (left) vs. the new (right) TDAQ electronics designs; the active splitter system present in the old version is integrated, together with the “Type1” and “DRS Board” functionalities, into the WaveDREAM board, making the MEG II TDAQ system extremely compact

Simplified schematic of the WaveDREAM board. It contains 16 variable-gain input amplifiers, two DRS4 chips, 16 ADC channels and a Spartan 6 FPGA. An optional high voltage generator for SiPM biasing can be mounted as a piggy-back board

8.2 The WaveDREAM board

The new system integrates the basic trigger and DAQ (TDAQ) functionalities onto the same electronics board, the WaveDREAM board (WDB). A simplified schematic of the WDB is shown in Fig. 88.

It contains 16 channels with variable-gain amplification and flexible shaping through programmable pole-zero cancellation. Switchable gain-10 amplifiers and programmable attenuators allow an overall input gain from 0.5 to 100 in steps of two. A multiplexer can send one input signal to two channels simultaneously, which can then be set to different gains, at the expense of having only 8 channels per board. Two DRS4 chips [147] are connected to two 8-channel ADCs, which are read out by a Field-Programmable Gate Array (FPGA). In normal operation, the DRS4 chips work in “transparent mode”, sampling the input signals continuously at speeds up to 5 GSPS into an analogue ring buffer. At the same time, a copy of the input signal is sent to the DRS4 output, where it is digitised continuously by the ADCs at 80 MSPS with 12 bit resolution.

The output stream of the ADCs is used in the FPGA to perform complex trigger algorithms, such as a threshold cut on the sum of all input channels. Interpolation of the ADC samples via look-up tables allows time coincidence decisions with resolutions of a few nanoseconds, much finer than the ADC sampling interval. When a trigger occurs, the DRS4 chip is stopped and its internal 1024-cell analogue memory is digitised through the same ADCs previously used for the trigger. With this technique, both complex triggering and high speed waveform sampling are possible on the same board.
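The sum trigger with interpolated threshold crossing can be sketched in a few lines. This is an illustrative software model of the idea, not the actual FPGA firmware; all function and variable names are our own:

```python
import numpy as np

F_ADC = 80e6  # ADC sampling rate in transparent mode (80 MSPS)

def sum_trigger(waveforms, pedestals, threshold):
    """Illustrative sum trigger: fire when the pedestal-subtracted sum
    over all input channels exceeds a threshold.

    waveforms: (n_channels, n_samples) array of ADC samples.
    Returns (fired, crossing_time_s); the crossing time is refined by
    linear interpolation between samples, mimicking the look-up-table
    interpolation performed in the FPGA.
    """
    total = (waveforms - pedestals[:, None]).sum(axis=0)
    over = np.nonzero(total > threshold)[0]
    if len(over) == 0:
        return False, None
    i = over[0]
    if i == 0:
        return True, 0.0
    # fractional position of the threshold crossing between samples i-1 and i
    frac = (threshold - total[i - 1]) / (total[i] - total[i - 1])
    return True, (i - 1 + frac) / F_ADC
```

The interpolation is what allows coincidence decisions finer than the 12.5 ns ADC sampling interval.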

The SiPMs of the MEG II experiment require bias voltages in the range 30–60 V. Some detectors use six SiPMs in series, requiring voltages of up to 240 V. This voltage can be supplied through the signal cables, with capacitive de-coupling of the signal into the amplifiers as shown in Fig. 88. An ultra-low noise bias voltage generator has been designed to accommodate these needs. A Cockcroft–Walton (CW) stage (also known as a Greinacher multiplier) generates the high voltage from the 24 V supply at a switching frequency of 1 MHz (see Fig. 89).

A Proportional-Integral (PI) regulator keeps the output voltage stable by comparing it, through a voltage divider, with a demand voltage given by a DAC. An elaborate low-pass filter reduces the output ripple to below 0.1 mV, so that it cannot be seen at the input of the WDB even with an amplifier gain of 100. Since individual SiPMs require slightly different bias voltages, a simple 5 V DAC “sitting” at the high voltage potential can add between 0 V and 5 V to the output voltage on a channel-by-channel basis (see Fig. 90).

The 5 V DAC and the ADC for current measurements are placed at a high voltage ground defined by the CW generator. An isolated DC–DC converter, together with a low drop-out (LDO) regulator, generates the 5 V supply required by the DAC and the ADC. They are interfaced through an SPI bus via a digital isolator. A separate 24 bit ADC measures the CW voltage through a precision voltage divider.

At a CW voltage of 58 V for example, output voltages from 58 to 63 V can be generated for each channel, which is sufficient to accommodate variations between different SiPMs. The output current is measured via shunt resistors and a 16 bit ADC with differential inputs. The voltage drop across the shunt resistor is measured and converted into a current by the control software running on the soft core processor in the FPGA. The DAC is adjusted according to the voltage drop to keep the output voltage stable independent of the current, while high voltage CMOS switches (IXYS CPC7514) are used to turn off individual channels. Different CW generators have been developed for different output voltages and powers, reaching up to 240 V and 50 mA. Alternatively, a single high voltage can be distributed throughout the crate backplane, reducing costs by eliminating individual CW generators for each WDB.
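As a numerical illustration of this biasing scheme, the per-channel voltage is the common CW voltage plus the floating-DAC offset. The DAC resolution below is an assumed value for the sketch, not the specification of the actual part:

```python
def channel_bias(v_cw, dac_code, dac_bits=12, dac_full_scale=5.0):
    """Per-channel SiPM bias: the CW generator sets a common high
    voltage, and a 5 V DAC floating at that potential adds a
    per-channel offset of 0-5 V.

    dac_bits and dac_full_scale are illustrative assumptions.
    """
    offset = dac_full_scale * dac_code / (2**dac_bits - 1)
    return v_cw + offset
```

For a CW voltage of 58 V, codes from 0 to full scale span the 58–63 V range quoted in the text.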

Using this scheme, a cost effective and highly precise bias generator has been realised. The absolute voltage accuracy (as measured with an external multimeter) is below 1 mV at a maximum current of 2.5 mA. The current measurement has a resolution of 1 nA at a full range of 50 \(\upmu \hbox {A}\) with an accuracy of 0.1%. The high voltage bias generator is implemented as an optional piggy-back PCB placed on top of the WDB (see Fig. 91), so it can be omitted for channels which do not need biasing (such as PMT channels which have a separate high voltage supply), thus reducing costs.

Two WaveDREAM boards without (top) and with a high-voltage piggy-back board (bottom)

The WDB can be used in stand-alone mode, where it is read out through Gigabit Ethernet and powered through Power-over-Ethernet (PoE+). For MEG II, it was decided to house 16 boards in a compact crate. This crate requires Gbit links for the simultaneous read-out of waveform and trigger data, a common high voltage for the SiPM biasing, an integrated trigger distribution and an ultra-low jitter clock with a few picoseconds phase precision. Since such a crate is not available on the market, a new standard has been developed. The WaveDAQ crate is a 3 HE 19” crate with \(16+2\) slots and a custom backplane, as can be seen in Fig. 92. The Crate Management Board (CMB), which contains the 220 V power supply together with a shelf management unit, is placed at the right side of the crate. The power supply generates a 24 V crate power of 350 W and a 5 V standby power for the shelf manager. Cooling is achieved by fans on the rear side blowing air from the back to the front, where it exits through holes in the various boards. This topology allows crates to be stacked directly on top of each other, making the whole system very compact.

The CMB contains an 8 bit micro-controller programmed in the C-language. It is connected to a dedicated Ethernet network for remote control and monitoring, and has a LED display and buttons for local control. Current and temperature sensors reflect the state of the crate, and each of the 18 slots can be powered on and off individually. The micro-controller is connected to all slots via a Serial Peripheral Interface (SPI) bus. This allows detection of individual boards in each slot, communication with all WDBs as well as remote firmware updates through the backplane. A physical select line for each slot allows geographic addressing as in the “good old CAMAC days”.

The WaveDAQ crate contains 16 slots for WDBs, which provide 256 input channels. The flexibility of the WDBs allows the readout of SiPMs, PMTs and drift chamber channels. The MEG II experiment will use a total of 37 such crates for the data acquisition of all detectors. The global trigger as well as the trigger and clock distribution is also housed in WaveDAQ crates, increasing the total number to 39.

In addition, the WaveDAQ crate contains two slots for so-called “concentrator” boards. The trigger concentrator board (TCB) receives 8 Gbit serial links from each WDB and is described later. The data concentrator board (DCB) receives two separate Gbit serial links from each WDB for waveform readout. This dual-star topology allows the trigger and the DAQ system to operate simultaneously without interference.

An integrated trigger bus allows the distribution of trigger signals through the backplane. Busy signals from each slot are connected via a “wired-or” and are used to re-arm the trigger after an event. A low jitter clock with skew corrected PCB traces is distributed through the backplane. Measurements show a slot-to-slot variation below 50 ps and a jitter below 5 ps. All backplane communication signals except the busy line use the LVDS standard.

Each WDB supports hot-swap functionality. During hot insertion, an inrush current controller ramps up the board’s capacitors gently, avoiding connector sparks and backplane power supply glitches. A switch at the handle latch switches off the internal power before the board is extracted.

8.3 Data read-out: the data concentrator board

The DCB is responsible for the configuration of all boards inside the crate through the SPI links, the distribution of the master clock and trigger signals, the readout of waveform data from each slot through dedicated serial links, the merging and formatting of the data, and the interface to the global DAQ computers through Gigabit Ethernet. It uses a Xilinx Zynq-7030 chip, which contains a dual-core ARM Cortex-A9 processor embedded in the FPGA fabric and running at 1 GHz. The chip is complemented by an SD card storing the Linux operating system, 512 MB of DDR3 RAM, and a Small Form-Factor Pluggable (SFP) transceiver for 1 or 10 Gbit/s Ethernet. A dedicated clock distributor with integrated jitter cleaner (LMK03000 [148]) receives an internal or external clock and distributes it through the backplane to all slots via a star topology.

A dedicated front-end program runs on the ARM processors; it collects waveform data from all 16 WDBs, merges them into one event, and sends it to the central DAQ computers via Gigabit Ethernet (optionally 10 Gbit). The event format is compatible with the MIDAS DAQ system used in MEG II. In addition, the front-end program configures and monitors all WDBs and the TCB through the SPI links. It can communicate with the CMB to reboot individual slots in case of problems or for firmware upgrades.

8.4 Trigger processing: trigger concentrator board

The trigger must suppress the background by almost six orders of magnitude, resulting in an acquisition rate of about 10 Hz. The real-time reconstruction algorithms rely on the fast-response detectors: the LXe detector for the photon observables and the pTC for the positron ones. The ionisation drift time in the CDCH cells prevents the trigger system from using any track reconstruction information at trigger level 0.

Event selection relies on an on-line reconstruction of the decay product observables, such as momenta, relative timing, and direction. Logic equations are mapped onto FPGA cells and executed at 80 MHz so as to be synchronous with the FADC data flow. An estimate of the photon energy is obtained from the linear sum of the pedestal-subtracted signal amplitudes of the LXe photo-sensors, each weighted according to its own gain; this is efficiently implemented using the digital signal processor (DSP) units in the FPGA. The increased ADC resolution (12 vs. 10 bit), coupled with the improved single photoelectron response of the new sensors, will allow a resolution better than that of MEG (7% FWHM at the signal energy \(E_{\mathrm {\gamma }}= 52.8\,\text {MeV}\)) to be achieved, though the final resolution will depend on the running conditions. The relative timing will benefit from the WDB comparators coupled to each input signal (on both the LXe detector and the pTC), whose latch time can be further refined by look-up tables implemented in the FPGA to correct for time-walk effects. Here too we expect the resolution to improve significantly on the 3 ns achieved in MEG; some results on the expected online time resolutions are reported in Sect. 8.7. Moreover, the enhanced imaging capability due to the finer detector segmentation (smaller LXe photo-sensors and pTC counters) permits tighter angular constraints on the decay kinematics.
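The online photon-energy estimate described above amounts to a single weighted sum over photo-sensors. A minimal sketch, in which the calibration constant and the function name are placeholders rather than the actual firmware quantities:

```python
import numpy as np

def online_photon_energy(amplitudes, pedestals, gains, calib=1.0):
    """Online E_gamma estimate: linear sum of pedestal-subtracted
    photo-sensor amplitudes, each weighted by the inverse of its own
    gain (one multiply-accumulate per sensor, as mapped onto the
    FPGA DSP units).

    calib is an assumed placeholder converting the weighted sum to MeV.
    """
    return calib * np.sum((np.asarray(amplitudes) - np.asarray(pedestals))
                          / np.asarray(gains))
```

Equalising the sensors by their gains is what makes a plain linear sum a usable energy estimator at trigger level.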

The boards designed for the online data processing are called TCBs. A TCB gathers information from lower-level trigger boards, which can be WDBs connected via the back-plane or other TCBs in the same or another crate. In the first case the connection is provided via the back-plane, in the second by a cable connected to the front panel. In order to minimise design and production costs, we decided to use the same 12-layer layout for all TCBs, independent of the role each one plays in the trigger hierarchy. TCBs differ from each other only in the firmware running on the on-board Xilinx Kintex7 FPGA [149]. Apart from the reconstruction algorithms, which depend on the individual sub-detectors, other features depend on the slot assignment. For instance, the direction of the I/O data lines is from the back-plane to the FPGA if the TCB is located at the centre of the crate (Master position in all crates), and the other way round for higher-level TCBs hosted in a Slave position in the trigger crate; the two configurations are shown in Fig. 93.

TCB configured as slave (top) and master (bottom); in case of a slave board the data-flow is from the left (front panel) to the right (back plane) and vice versa for a master

8.5 System synchronisation: ancillary board

The main task of the Ancillary system is to provide the TDAQ boards with an ultra-low jitter clock signal to be used as the experiment time reference. We selected a low jitter 80 MHz oscillator [150] and a low jitter fan-out from Maxim [151]; as a result, we measure an overall jitter better than 10 ps at the WaveDREAM input. The clock distribution is arranged as a master-to-slave fan-out implemented on a dedicated board, the Ancillary board, which can be configured as either master or slave. As a master, it generates the low jitter clock signal, receives the control signals (such as the trigger and synchronisation pulses) from the master TCB, and forwards them to all the other TDAQ modules through the slave modules; the link is provided by the backplane. In the opposite direction, the busy signal is distributed from the DAQ crates to the trigger crates and used as a veto for any trigger signal generation.

8.6 Slow control

Each experiment has quantities that must be monitored or controlled “slowly”. Examples are temperatures, power supply currents, and environmental values such as humidity and pressure. This is the task of the slow control system. MEG II relies on the Midas Slow Control Bus (MSCB), which has been successfully used in the MEG experiment over the past decade. It uses the RS-485 standard for communication and a set of optimised commands [152] for effective and quick exchange of data representing physical values.

The MSCB protocol has been implemented in the CMB, which allows the control and monitoring of the WaveDAQ crates directly from the MEG II slow control system. In addition, an MSCB communication line has been added to the WaveDAQ crate backplane, so the CMB can forward any MSCB command to individual slots in the crate. Each WDB implements an MSCB core for the control and monitoring of the bias high voltage for each channel. This core is implemented in the FPGA soft-core processor (Xilinx MicroBlaze), and connected to the DACs and ADCs of the high voltage piggy back board. Individual channels can be switched on and off, demand voltages can be set and currents can be read back through the slow control system.

In addition, a connector implementing the 1-Wire® bus system [153] has been placed on the front panel of the WDB. This system allows the connection of virtually any number of sensors to a single line, each accessed through a unique address. Besides the serial communication, the sensor power is also delivered through the same line, hence the name 1-Wire. This scheme allows each of the 16 SiPMs connected to a WDB to be equipped with an individual temperature sensor. All 16 sensors are connected to the 1-Wire bus and are accessible by the bias voltage control program inside the FPGA and by the MSCB slow control system. This allows the implementation of an algorithm that adjusts the bias voltage of each SiPM to follow the temperature dependence of the breakdown voltage, and therefore keep the gain constant even under temperature drifts.
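Such a compensation algorithm can be sketched as follows. The temperature coefficient used here is a typical SiPM value assumed for illustration, not a measured MEG II parameter, and the function name is our own:

```python
def compensated_bias(v_set_ref, t_ref_c, t_now_c, dv_dt=0.054):
    """Keep the SiPM overvoltage (and hence the gain) constant under
    temperature drifts by shifting the bias voltage along with the
    breakdown voltage.

    dv_dt: breakdown-voltage temperature coefficient in V/K
           (0.054 V/K is a typical SiPM value, assumed here).
    v_set_ref: bias set point calibrated at temperature t_ref_c (deg C).
    """
    return v_set_ref + dv_dt * (t_now_c - t_ref_c)
```

In the real system, the per-SiPM temperature read over the 1-Wire bus would feed this correction into the channel's DAC demand voltage.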

8.7 Performance

The TDAQ efficiency, defined as the product of the trigger efficiency to select candidate signal events and the experiment live-time fraction, affects the experiment sensitivity (cf. Eq. (1)).

The read-out scheme guarantees a data-transfer dead time of about 1 ms, allowing a trigger rate of about 100 Hz with negligible dead time. Such a rate is, however, not sustainable by the offline infrastructure, since the overall data size would increase by much more than a factor of 10 with respect to MEG. As a consequence, a maximum trigger rate of about 10 Hz, with an online selection efficiency close to unity, is sought.
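The trade-off between trigger rate and live time can be checked with a simple non-paralysable dead-time model; this is an illustration of the arithmetic, not the actual DAQ accounting:

```python
def dead_time_fraction(rate_hz, dead_time_s):
    """Fraction of time lost to readout for a non-paralysable dead
    time tau at trigger rate r: r*tau / (1 + r*tau)."""
    x = rate_hz * dead_time_s
    return x / (1.0 + x)
```

At 10 Hz with a 1 ms dead time the loss is about 1%, and it stays below 10% even at the 100 Hz limit of the read-out scheme, consistent with calling the dead time negligible.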

To accomplish this, the online event reconstruction algorithms have to be refined as described in Sect. 8.4, in particular for \(E_{\mathrm {\gamma }}\). The trigger resolution on the photon energy was estimated by processing MC-generated events with a C++ emulator of the FPGA firmware (FW). The projected resolution is more than a factor of 2 better than in MEG: \(\sigma _{E_{\mathrm {\gamma }}}/E_{\mathrm {\gamma }}= 1.5\%\) at \(45\) MeV (it was 3.5% in MEG).

The improved resolution will allow an increase in the online \(E_{\mathrm {\gamma }}\)-threshold without loss of efficiency in the analysis region, i.e. over \(48\) MeV, as reported in Fig. 94. The results indicate that we will be able to increase the online threshold by at least \(2\) MeV, from 42 to \(45\) MeV, leading to a trigger rate reduction of about a factor 2.

\({E_{\mathrm {\gamma }}}\)-spectrum in the LXe photon detector shown in black and the effective spectra by applying an online threshold at \(42\) MeV with a relative resolution of 3.5% (blue) and \(45\) MeV with a resolution of 1.5% (red)

The online time measurement will be extracted by sampling the WDB discriminator output at 800 MHz and intercepting the first sample over threshold. All the TDCs will be synchronised with each other by the clock signal distributed by the ancillary system. The intrinsic resolution of this TDC is the clock period divided by \(\sqrt{12}\), i.e. \(\approx 350\) ps.
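The quoted intrinsic resolution follows from the standard quantisation-error formula for a clocked TDC, where the timing error is uniform over one clock period:

```python
import math

def tdc_sigma(f_clock_hz):
    """Intrinsic resolution of a clocked TDC: a quantisation error
    uniformly distributed over one clock period T has standard
    deviation T / sqrt(12)."""
    return 1.0 / f_clock_hz / math.sqrt(12.0)
```

At 800 MHz the clock period is 1.25 ns, giving about 360 ps, in line with the \(\approx 350\) ps quoted above.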

The method was tested during a beam test with a pTC prototype at PSI. The time resolution was measured by comparing the measured times in two adjacent pixels, where the transit time spread of the positron along that path is \(\approx 50\) ps, much lower than the expected resolution. The measured time resolution on a single pixel is \(\approx 500\) ps, close to the intrinsic limit; the origin of the difference has been studied and traced to two main factors: time walk in the discriminator and electronics jitter in the FPGA processing. The former will be corrected in the final system, and we estimate reaching a single-channel time resolution of about 450 ps.

The online time resolution of the positron–photon coincidence is then expected to be better than 1 ns, more than a factor of 3 better than in MEG. Figure 95 shows the effective trigger coincidence window for MEG II superimposed on that of MEG. Thanks to the improved time resolution, a substantial reduction in the coincidence width (FWHM) from 20 ns to at least 14 ns will be possible, leading to a trigger rate reduction of a factor of 1.5–2 with no efficiency loss on signal.

Comparison of the online positron–photon timing trigger selection efficiency of MEG II (blue) to MEG (red); the selection width (FWHM) for MEG II is 14 ns while previously it was 20 ns for MEG

The MEG II trigger rate is then expected to be \(\approx 10\) Hz, a value comparable with that of MEG. The improvements on the online reconstruction resolutions are therefore expected to compensate for the increased muon stopping rate.

9 Expected sensitivity

The estimation of the MEG II sensitivity follows the approach used in MEG [2]. A detailed MC simulation of the beam and the detector is implemented, together with a reconstruction of the particles' observables. The probability density functions (PDFs) of the observables relevant for discriminating signal from background are generated with the help of simulation and prototype data. Then, an ensemble of simulated experiments (toy MCs) is generated from the PDFs and analysed, extracting a set of upper limits (ULs). Finally, the sensitivity is estimated.

9.1 Simulation and reconstruction

We developed a full simulation of the detector based on Geant4, adding information, where necessary, from measurements (e.g. light propagation properties in LXe) or dedicated simulations (e.g. ionisation density in the drift chamber from Garfield [154]). The Geant4 hits are then converted into simulated electronic signals, making use of waveform templates extracted from data collected with prototypes or with the final detectors. At this stage, we also mix different Geant4 events in order to simulate the pile-up of multiple muon decays within the same DAQ time window.

Both data and simulated events go through the same reconstruction chain. For each sub-detector, a waveform analysis is performed in order to extract raw observables, such as the signal time and charge. A hit reconstruction procedure is then applied to translate them into calibrated physical observables. The following variables are extracted:

1. the drift time of the ionisation electrons in the drift chamber and the hit position along the z-coordinate,

2. the hit time and position in each pTC and RDC PS tile, and

3. the number of collected photons in each photo-sensor of the LXe photon detector and RDC calorimeter.

Several reconstruction algorithms are then applied to extract the single particle’s observables. Most notably, dedicated pattern recognition algorithms and a Kalman filter technique are used to extract the positron track parameters; the positron is tracked through the pTC tiles to extract the best estimate of the positron time; number and timing of collected scintillation photons of each photo-detector in the LXe photon detector are used to extract the photon time and conversion vertex as well as the photon energy.

The probability density functions (PDFs) describing the distributions of each kinematic variable for the signal and the backgrounds are generated relying on MC simulated events or on data collected from prototypes.

A representative scenario for the MEG II resolutions and efficiencies is summarised in Table 9 and compared to the MEG performance. The efficiency of the positron reconstruction is greatly improved with respect to MEG, thanks to the high efficiency of the tracking system and the optimised geometry of the CDCH and pTC. The resolution on the relative time between the \({\mathrm {e}^{+}}\) and the \(\gamma \) is estimated to be \(\sigma _{t_{\mathrm {e}^+ \gamma }}\simeq 84\, \mathrm{ps}\), adopting the most conservative estimate for the LXe photon detector timing resolution, \(\sigma _{t_{\mathrm {\gamma }}} \simeq {70}\, \mathrm{ps}\), and an error on the positron timing due to the pTC resolution of \(\sigma _{t_{\mathrm {e}^{+}}^\mathrm {pTC}} \simeq {31}\, \mathrm{ps}\), which includes an inter-counter calibration contribution \(\sigma _{t_{\mathrm {e}^{+}}}^\mathrm {inter{\text {-}}counter}/\sqrt{\bar{N}_{hit}}\simeq {10}\, \mathrm{ps}\), a synchronisation contribution between WDBs of \(\sigma ^\mathrm {WDB}_{t_{\mathrm {e}^{+}}}\simeq {25}\, \mathrm{ps}\) and a contribution due to the track extrapolation along the CDCH measured trajectory of \(\sigma ^\mathrm {CDCH}_{t_{\mathrm {e}^{+}}}\simeq {20}\, \mathrm{ps}\).

Table 9

Resolutions (Gaussian \(\sigma \)) and efficiencies of MEG II compared with those of MEG

As an example we show the \(E_{\mathrm {\gamma }}\) PDFs for signal (see Fig. 96) and accidental background events (see Fig. 97). The expected improvement in MEG II is visible by comparing these PDFs (blue) with the 2010 MEG data PDFs (red). In the \(E_{\mathrm {\gamma }}\) background PDFs various contributions are taken into account: RMD, photons from positron AIF and from bremsstrahlung on materials in the detector, pile-up events, as well as resolution effects. The configuration of the CDCH, with a smaller amount of material close to the LXe photon detector, reduces the AIF contribution, which is dominant for \(E_{\mathrm {\gamma }}>52\,\text {MeV}\), by about \(20\%\) with respect to the MEG detector. The combined effect of the increased resolution and of the lower high energy background is clearly visible in Fig. 97.

Comparison of the \(E_{\mathrm {\gamma }}\) PDFs for accidental background events based on the resolutions obtained in 2010 data (red) and on the projected value for the upgrade (blue). Differences in relative background contributions between RMD, AIF and pile-up are also taken into account

9.2 Analysis

Each toy MC is analysed using the maximum likelihood technique developed for the MEG data analysis [2, 145, 146] to extract a 90% CL UL on the number of signal events, following the prescription of [155]; this is converted into an UL on \( \mathcal{B} ({\mu ^+ \rightarrow \hbox {e}^+ \gamma })\) by using the appropriate normalisation factor. This technique is more efficient and reliable than a box analysis, since all types of background are correctly folded into the global likelihood function and taken into account with their own statistical weights. The enhanced precision of the MEG II detectors allows a much better separation of the signal from the background and significantly reduces the spill of the photon and positron background distributions into the signal region caused by experimental resolution effects.

9.3 Sensitivity estimate

An ensemble of simulated experiments (toy MCs), with statistics comparable to the expected number of events during the MEG II data taking, is generated from the PDFs, assuming zero signal events and an average number of radiative and accidental events obtained by extrapolating the results of the MEG experiment and taking into account the new detector performance. The numbers of RMD and accidental events are then left free to fluctuate according to Poisson statistics. For each toy MC we extract an UL on \( \mathcal{B} ({\mu ^+ \rightarrow \hbox {e}^+ \gamma })\). Following [2, 145, 146], we define the sensitivity as the median of the distribution of the ULs obtained from the toy MCs. With respect to the average, the median turns out to be a more stable estimator against outliers.
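The toy-MC procedure can be sketched schematically as follows. The upper-limit function is a placeholder standing in for the full maximum likelihood analysis, and the function names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def sensitivity(n_toys, mu_bkg, ul_from_counts):
    """Toy-MC sensitivity sketch: for each pseudo-experiment, let the
    background fluctuate with Poisson statistics, compute an upper
    limit on the signal, and take the median of the UL distribution
    (a more outlier-robust estimator than the mean).

    ul_from_counts: callable mapping observed counts to an UL; here a
    placeholder for the full likelihood analysis.
    """
    uls = [ul_from_counts(rng.poisson(mu_bkg)) for _ in range(n_toys)]
    return float(np.median(uls))
```

In the real analysis, each toy is a full pseudo-dataset of RMD and accidental events drawn from the PDFs, and the UL comes from the likelihood fit rather than a simple counting formula.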

Expected sensitivity and discovery potential of MEG II as a function of the DAQ time compared with the bounds set by MEG [2]. Assuming conservatively 20 DAQ weeks per year, we expect a branching ratio sensitivity of \({6 \times 10^{-14}}\) in 3 years

In Fig. 98 we show the evolution of the sensitivity as a function of the DAQ time (in weeks). Assuming conservatively 140 DAQ days per year, we can reach a sensitivity of \({6\times 10^{-14}}\) in three years. The sensitivity has been re-evaluated since the proposal [88] according to the updated estimations of the expected detector performances, the inclusion of the downstream RDC and a more conservative assumption on the DAQ time.

10 Conclusions

We have presented the detailed design of the components of the MEG II detector, together with a presentation of the scientific merits of the experiment. The MEG II detector results from a mixture of upgraded components of the MEG experiment (beam line, target, calibration, LXe photon detector) and of newly designed components (CDCH, pTC, RDC, trigger and DAQ). The design has been completed and construction and commissioning are ongoing.

The resolutions on the relevant physical variables are expected to improve by about a factor of 2, as suggested by simulation and preliminary results from laboratory and beam tests. Those improvements, together with an increase by more than a factor of 2 both in muon decay rate and signal detection efficiency, are expected to bring the sensitivity to the \({\mu ^+ \rightarrow \hbox {e}^+ \gamma }\) decay rate down to \({6\times 10^{-14}}\) in three years of data taking. In terms of discriminating power of parameters of models beyond the Standard Model, this limit is comparable to those achievable by the next generation of cLFV experiments exploiting other channels.

Footnotes

A usual selection criterion is to choose 90%-efficient cuts on each of the variables (\(E_{\mathrm {\gamma }}\), \(p_{\mathrm {e}^{+}}\), \(\varTheta _{\mathrm {e}^+ \gamma }\), \(t_{\mathrm {e}^+ \gamma }\)) around the values expected for the signal; this criterion defines the selection efficiency to be \(\epsilon _\mathrm {s} = (0.9)^4\). This kind of analysis, in which one counts the number of events within some selection cuts and compares the number found with the background prediction, is called a “box analysis”. MEG/MEG II adopt more refined analyses, which take into account the different distributions of (\(E_{\mathrm {\gamma }}\), \(p_{\mathrm {e}^{+}}\), \(\varTheta _{\mathrm {e}^+ \gamma }\), \(t_{\mathrm {e}^+ \gamma }\)) for background and signal events by using maximum likelihood methods.

This effect is enhanced by large Yukawa couplings. In SUSY-GUT models this role is played by the large top Yukawa coupling. In the seesaw mechanism, the neutrino Yukawa couplings can be of the same order as those of quarks and charged leptons. In particular, in SO(10) GUTs, the neutrino Yukawa couplings are related to the up-type ones and at least one of them should be as large as the top coupling [41].

Recent effective-field-theory analyses have shown that operators valid at some high scale mix, via renormalisation-group evolution, at the low energy scale where the experiments take place [86]. Due to this mixing effect, as well as higher order contributions, the limit on \( \mathcal{B} ({\mu ^+ \rightarrow \hbox {e}^+ \gamma })\) provides severe constraints also on operators other than (3).

Notes

Acknowledgements

We are grateful for the support and co-operation provided by PSI as the host laboratory and to the technical and engineering staff of our institutes. This work is supported by Schweizerischer Nationalfonds (SNF) Grant 200021 137738, 200020 162654, 200020 172706, 206021 157742 and 206021 177038 (Switzerland), DOE DEFG02-91ER40679 (USA), INFN (Italy) and MEXT/JSPS KAKENHI Grant Numbers JP22000004, JP25247034, JP26000004, JP15J10695, JP17J03308, JP17J04114, JP17K14267 and JSPS Overseas Research Fellowships 2014-0066 (Japan). Partial support of the Italian Ministry of University and Research (MIUR) Grant No. RBFR138EEU 001 is acknowledged.

ATLAS collaboration, Search for supersymmetry in final states with two same-sign or three leptons and jets using 36 fb\(^{-1}\) of \(\sqrt{s}=13\) TeV pp collision data with the ATLAS detector. J. High Energy Phys. 09, 084 (2017). https://doi.org/10.1007/JHEP09(2017)084. arXiv:1706.03731

A. Czarnecki, W.J. Marciano, K. Melnikov, Coherent muon–electron conversion in muonic atoms, in Proceedings of the Workshop on Physics at the First Muon Collider and at the Front End of a Muon Collider, AIP Conf. Proc. (1998), pp. 409–418. https://doi.org/10.1063/1.56214. arXiv:hep-ph/9801218

Copyright information

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.