Fitting Multiple Orders of HRC-S/LETG Data

Synopsis:

Because of the low energy resolution in the HRC-S, the PHA2
file contains two rows (negative and positive) containing all
the spectral orders. While it is not possible to separate
the overlapping orders, they can be modeled in
Sherpa by defining the instrument response as a
composite of the orders in which you are interested.

This thread uses response files (gRMFs and gARFs) created with
CIAO to model and fit the first three positive and negative
orders of the spectra.

The spectra that will be used in this session have already
been binned by a factor of 10. The data are input to
Sherpa with the load_pha command
(a subset of the load_data command):

sherpa> load_pha(1, "460_leg_m1_bin10.pha")
WARNING: systematic errors were not found in file '460_leg_m1_bin10.pha'
statistical errors were found in file '460_leg_m1_bin10.pha'
but not used; to use them, re-read with use_errors=True
read background_up into a dataset from file 460_leg_m1_bin10.pha
read background_down into a dataset from file 460_leg_m1_bin10.pha
sherpa> load_pha(2, "460_leg_p1_bin10.pha")
WARNING: systematic errors were not found in file '460_leg_p1_bin10.pha'
statistical errors were found in file '460_leg_p1_bin10.pha'
but not used; to use them, re-read with use_errors=True
read background_up into a dataset from file 460_leg_p1_bin10.pha
read background_down into a dataset from file 460_leg_p1_bin10.pha

Sherpa now refers to the negative-order spectrum as
data set 1 and the positive-order spectrum as data set 2.

To establish the instrument response, the individual instrument response files (gRMFs
and gARFs) need to be read into Sherpa for each of the six
spectral orders (+/- 1,2,3).
If the ARF and RMF filenames are recorded in the header of the PHA
file, Sherpa will load them
automatically; if not, they need to be loaded manually
with the load_arf/load_rmf
or load_multi_arfs/load_multi_rmfs
commands.

Several options are available for loading multiple
spectral responses.
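For example, the six order-sorted responses could be attached with the load_multi_arfs and load_multi_rmfs commands, three per grating arm. The sketch below assumes hypothetical filenames of the form 460_leg_m1.arf; substitute the names of your own gARFs and gRMFs:

```python
def leg_response_files(arm, orders=(1, 2, 3)):
    """Hypothetical filename pattern for the order-sorted responses
    of one grating arm ('m' = negative, 'p' = positive)."""
    return (["460_leg_%s%d.arf" % (arm, o) for o in orders],
            ["460_leg_%s%d.rmf" % (arm, o) for o in orders])

def load_leg_responses(dataset_id, arm):
    """Attach the gARFs and gRMFs for orders 1-3 of one arm."""
    from sherpa.astro.ui import load_multi_arfs, load_multi_rmfs
    arfs, rmfs = leg_response_files(arm)
    load_multi_arfs(dataset_id, arfs, [1, 2, 3])  # resp_id = 1, 2, 3
    load_multi_rmfs(dataset_id, rmfs, [1, 2, 3])

# load_leg_responses(1, "m")  # negative orders -> data set 1
# load_leg_responses(2, "p")  # positive orders -> data set 2
```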

The data set and response IDs established in the
Sherpa session may be listed with the
list_data_ids and list_response_ids commands. Detailed
information about all loaded data sets and their associated
ARF and RMF responses is available from the show_all,
show_data, and show_bkg
commands (each of which can save its output to a
file with the 'outfile' argument). Individual data sets
may be retrieved with get_data, and the ARF and RMF of a
response with the get_arf and
get_rmf commands.

The response orders are stored in the order in which they are loaded,
noted by the 'resp_id' value. The first response in the list
of response IDs is treated as the primary (usually resp_id=1), i.e. it
is used as the mapping from wavelength or energy units to spectral
channel. The primary response is used as the independent variable
grid for plotting, as well as for translating the y-axis to the proper
units (keV or Angstroms). During fitting, the calculation of the
source model uses the respective ARF grid from each of the
responses. Similarly, the respective response is used when folding
through the response matrix.

We can use ChIPS to adjust the features of the plot, such as
setting colors, changing font styles, and repositioning
objects. Below, get_data_plot_prefs is used to
connect the data points with a solid line, and to remove the
data point symbols and y-axis error bars from the plot. The
resulting plot is displayed in Figure 2.
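A minimal sketch of those preference changes; the dictionary keys used here ('linestyle', 'yerrorbars') are assumptions and may differ between plotting backends, as may the key controlling the point symbol:

```python
def style_data_plot():
    """Draw a connected line with no y error bars (assumed preference
    names; the symbol/marker key depends on the plotting backend)."""
    from sherpa.astro.ui import get_data_plot_prefs
    prefs = get_data_plot_prefs()   # shared by subsequent plot_data() calls
    prefs["linestyle"] = "solid"    # connect the data points
    prefs["yerrorbars"] = False     # drop the y-axis error bars
```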

There are two plate gaps in the HRC-S detector: one at ~50
Å and the other at ~60 Å; see Figure 7.1 in the POG
for the HRC-S detector layout. The effect of
dithering into these gaps appears in the negative-order and
positive-order data, respectively, as a flat area of zero
counts. The regions where the data are in a plate gap are evident in
Figure 3.

Figure 3: There are two plate gaps in the HRC-S detector: one at
~50 Å and the other at ~60 Å. The effect of dithering into these
gaps can be seen in the negative-order (top frame) and
positive-order (bottom frame) data.

These regions are ignored in the fitting so that the statistic calculations
are not skewed:
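One way to script the exclusion is sketched below; the PLATE_GAPS limits are illustrative assumptions, not measured values, so read the actual gap edges off the plotted data:

```python
# Illustrative gap limits [Angstrom]; not the values used in this thread.
PLATE_GAPS = {1: (48.0, 54.0),   # negative-order arm: gap near 50 A
              2: (57.0, 63.0)}   # positive-order arm: gap near 60 A

def exclude_plate_gaps(gaps=PLATE_GAPS):
    """Ignore the wavelength range dithered across each plate gap."""
    from sherpa.astro.ui import ignore_id
    for dataset_id, (lo, hi) in gaps.items():
        ignore_id(dataset_id, lo, hi)
```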

We plan on subtracting the background from
the data (rather than fitting it
simultaneously with the source data), so it is only
necessary to create a source model expression. We model
this source with a broken power-law (xsbknpower) absorbed by the interstellar
medium (xswabs).

In this example, we choose to use the XSpec version of the models. These
models are defined such that the x-values are always
interpreted as energy
bins. When the analysis setting is using non-energy bins
(e.g., WAVE in this case) and an XSpec model is
defined, Sherpa converts the bins to energy before
sending them to the XSpec model. After the XSpec model
finishes, Sherpa converts back to the original
units. Sherpa also scales the model values appropriately
(e.g., if counts/keV came out of the XSpec model, and
Sherpa is working with wavelength bins, then Sherpa
scales the output of the XSpec model to counts/Angstrom).
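The bookkeeping Sherpa performs can be illustrated with a few lines of plain Python, using hc ~ 12.398 keV*Angstrom (the function names here are ours, for illustration only):

```python
HC = 12.39842  # hc in keV * Angstrom

def wave_to_energy_bin(wlo, whi):
    """Map a wavelength bin [Angstrom] to an energy bin [keV].
    Note that the edges swap: short wavelength = high energy."""
    return HC / whi, HC / wlo

def per_kev_to_per_angstrom(y_per_kev, wlo, whi):
    """Rescale a bin-averaged density so the quantity integrated
    over the bin is unchanged (counts/keV -> counts/Angstrom)."""
    elo, ehi = wave_to_energy_bin(wlo, whi)
    return y_per_kev * (ehi - elo) / (whi - wlo)

# e.g. the 20-21 Angstrom bin corresponds to ~0.59-0.62 keV
```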

The absorption model will be referred to as abs1, and the
broken power-law will be bpow1; the product of
the two is assigned as the source model for the data sets:
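That assignment might be sketched as follows (a hypothetical helper; the model-string form of set_source creates the named components abs1 and bpow1 on first use):

```python
def define_source():
    """Assign the absorbed broken power law to both data sets.
    Reusing the same component names (abs1, bpow1) ties the
    parameters together, so one set of values fits both spectra."""
    from sherpa.astro.ui import set_source
    for dataset_id in (1, 2):
        set_source(dataset_id, "xswabs.abs1 * xsbknpower.bpow1")
```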

Note that if we had not set initial parameter values for
the model, we could have used the Sherpa guess()
function to estimate the initial
parameter values for each model component separately,
based on the data input to the session. To have
Sherpa automatically query for initial parameter values
when a model is established, set
'paramprompt(True)' (it is 'False' by
default).

We choose to set the break energy [keV] to a lower
starting point, and the hydrogen column
density (nH) is set to the Galactic value and
then frozen (which
means it will not be allowed to vary during the fit). The
rest of the parameters remain thawed.
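A sketch of those settings; the numerical values below are placeholders for illustration, not the values used in this thread:

```python
def init_parameters(nh_galactic=0.03):
    """Start the break energy low and freeze nH at the Galactic value.
    Both numbers are placeholders (nH in units of 10^22 cm^-2)."""
    from sherpa.astro.ui import set_par, freeze
    set_par("bpow1.breake", 0.5)     # illustrative break energy [keV]
    set_par("abs1.nh", nh_galactic)  # Galactic column toward the source
    freeze("abs1.nh")                # hold fixed during the fit
```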

sherpa> show_method()
Optimization Method: LevMar
name = levmar
ftol = 1.19209289551e-07
xtol = 1.19209289551e-07
gtol = 1.19209289551e-07
maxfev = None
epsfcn = 1.19209289551e-07
factor = 100.0
verbose = 0
sherpa> show_stat()
Statistic: Chi2Gehrels
Chi Squared with Gehrels variance.
The variance is estimated from the number of counts in each bin,
but unlike `Chi2DataVar`, the Gaussian approximation is not
used. This makes it more-suitable for use with low-count data.
The standard deviation for each bin is calculated using the
approximation from [1]_:
sigma(i,S) = 1 + sqrt(N(i,s) + 0.75)
where the higher-order terms have been dropped. This is accurate
to approximately one percent. For data where the background has
not been subtracted then the error term is:
sigma(i) = sigma(i,S)
whereas with background subtraction,
sigma(i)^2 = sigma(i,S)^2 + [A(S)/A(B)]^2 sigma(i,B)^2
Notes
-----
The accuracy of the error term when the background has been
subtracted has not been determined. A preferable approach to
background subtraction is to model the background as well as the
source signal.
References
----------
.. [1] "Confidence limits for small numbers of events in
astrophysical data", Gehrels, N. 1986, ApJ, vol 303,
p. 336-346.
http://adsabs.harvard.edu/abs/1986ApJ...303..336G
sherpa> set_method("neldermead")
sherpa> set_method_opt("finalsimplex", 0)
sherpa> set_stat("chi2xspecvar")

It is
important to note that when fitting data with
χ2 statistics, the data must be binned so
that no channels/bins have zero counts (see the Sherpa
group functions for the available binning options). If the
number of counts in a bin is less than 1, the variance is
set to 1 with chi2xspecvar. We can use the group_counts
function after subtracting the background to require that
each data bin contain at least one count.
Since the original data were grouped, regrouping the data
clears the filter, which must then be set again;
had the original data been ungrouped, the
ignore filter would have been retained.
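Those steps might be scripted as follows (a sketch; the helper name is ours):

```python
def regroup_after_subtract():
    """Ensure every bin has at least one count before a chi-square fit."""
    from sherpa.astro.ui import subtract, group_counts
    for dataset_id in (1, 2):
        subtract(dataset_id)         # remove the scaled background
        group_counts(dataset_id, 1)  # require >= 1 count per bin
    # Regrouping clears any existing filter, so the plate-gap
    # regions must be ignored again after this call.
```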

By omitting the regions of data over a plate
gap, the residuals are contained within approximately three sigma.
This will improve the statistical calculations shown
in the Examining Fit Results
section.

After creating a plot, it may be saved as a PostScript
file; in this example, "460_fit_delchi.ps" is returned:

sherpa> print_window("460_fit_delchi")

Finally, we may use the plot_order and
plot_model commands to plot spectral orders
individually or simultaneously. In the case of the latter,
we may use ChIPS commands to assign a different color to
each order to separate them visually.
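For instance, the three positive orders might be overlaid in a single plot with plot_order's overplot argument (a sketch; the helper name is ours):

```python
def plot_positive_orders(dataset_id=2):
    """Overlay the first three positive orders of one data set."""
    from sherpa.astro.ui import plot_order
    plot_order(dataset_id, orders=1)
    plot_order(dataset_id, orders=2, overplot=True)
    plot_order(dataset_id, orders=3, overplot=True)
```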

The χ2 goodness-of-fit information is reported with the
best-fit values after a fit is performed, and the
get_fit_results and show_fit
commands provide access to this information after the fit:

The number of bins in the fit (Data points), the
number of degrees of freedom (i.e. the number
of bins minus the number of free parameters), and the
final fit statistic value are reported. If the
chosen statistic is one of the χ2 statistics, as in
this example, the reduced statistic,
i.e. the statistic value divided by the number of degrees of
freedom, and the probability (Q-value), are
included as well.
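The derived quantities are simple arithmetic, as this small example shows (the numbers are invented for illustration):

```python
def fit_summary(statval, npoints, nfree):
    """Reproduce the derived quantities Sherpa reports after a fit:
    degrees of freedom = data points - free parameters, and the
    reduced statistic = statistic / dof."""
    dof = npoints - nfree
    return {"dof": dof, "rstat": statval / dof}

# e.g. 100 bins and 4 thawed parameters with chi2 = 110 gives
# dof = 96 and a reduced statistic of about 1.15
```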

The calc_chisqr command calculates the statistic
contribution per bin; in this example, the results for data
set 1 are returned:

The covar
(covariance) command can be
used to estimate confidence intervals for the thawed
parameters, though this method may not constrain parameters where the
parameter space is not smooth. In that case, we can try the
conf (confidence) method:

The functions calc_photon_flux
and calc_energy_flux can be used to return the total
integrated photon or energy flux over the full range of orders,
computed over the combined high resolution ARF grid. The photon
flux returned is in photons/cm2/sec, and the energy flux in ergs/cm2/sec.
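A sketch of retrieving both fluxes (the helper name is ours; lo and hi default to the full range of the analysis grid):

```python
def integrated_fluxes(dataset_id=1, lo=None, hi=None):
    """Total integrated model fluxes over the full (or a
    restricted) range of the response grid."""
    from sherpa.astro.ui import calc_photon_flux, calc_energy_flux
    pflux = calc_photon_flux(lo, hi, id=dataset_id)  # photons/cm^2/s
    eflux = calc_energy_flux(lo, hi, id=dataset_id)  # ergs/cm^2/s
    return pflux, eflux
```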

The file fit.py is a Python script which performs the
primary commands used above; it can be executed by typing
exec(open("fit.py").read()) on the Sherpa command line.

The Sherpa script command may be used
to save everything typed on the command line in a
Sherpa session:

sherpa> script(filename="sherpa.log", clobber=False)

(Note that restoring a Sherpa session from such a file
could be problematic since it may include syntax errors,
unwanted fitting trials, et cetera.)

The CXC is committed to helping Sherpa users transition to
new syntax as smoothly as possible. If you have existing
Sherpa scripts or save files, submit them to us via the
CXC Helpdesk and we will provide the
CIAO/Sherpa 4.9 syntax to you.