This is a fixed-text formatted version of a Jupyter notebook

Spectral analysis of extended sources

Prerequisites:

Context

Many VHE sources in the Galaxy are extended. Studying them with a 1D spectral analysis is more complex than studying point sources. One often has to use complex (i.e. non-circular) regions and, more importantly, one has to take into account the fact that the instrument response is non-uniform over the selected region. A typical example is the supernova remnant RX J1713-3945, which is nearly 1 degree in diameter. See the following article.

Objective: Measure the spectrum of RX J1713-3945 in a 1 degree region fully enclosing it.

Proposed approach:

We have seen in the general presentation of the spectrum extraction for point sources (see the corresponding notebook) that Gammapy uses specific dataset makers to first produce reduced spectral data and then to extract OFF measurements with the reflected-regions background technique: the gammapy.makers.SpectrumDatasetMaker and the gammapy.makers.ReflectedRegionsBackgroundMaker. However, unless the flag use_region_center is set to False, the former simply computes the reduced IRFs at the center of the ON region (assumed to be circular).

This is no longer valid for extended sources. To be able to compute average responses in the ON region, we can set use_region_center=False with the gammapy.makers.SpectrumDatasetMaker, in which case the values of the IRFs are averaged over the entire region.
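
To see why this matters, here is a toy illustration (plain Python, not Gammapy code; the Gaussian fall-off below is a made-up stand-in for a real offset-dependent effective-area profile): averaging the response over several points of an extended region gives a different value than evaluating it only at the region center.

```python
import math


def effective_area(offset_deg):
    """Toy effective area falling off with offset (hypothetical profile)."""
    return math.exp(-0.5 * (offset_deg / 1.5) ** 2)


# Sample offsets spanning a 1 deg wide ON region centred 1 deg off-axis
offsets = [1.0 + r for r in (-0.5, -0.25, 0.0, 0.25, 0.5)]

center_value = effective_area(1.0)
region_average = sum(effective_area(o) for o in offsets) / len(offsets)

print(f"centre: {center_value:.3f}, region average: {region_average:.3f}")
```

The averaged value differs from the central one; for a real instrument the difference grows with the size of the region.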

In summary, we have to:

- define an ON region (a 1 degree circle here) fully enclosing the source;
- use the gammapy.makers.SpectrumDatasetMaker with use_region_center=False, so that the IRFs are averaged over the region;
- extract the OFF measurements with the gammapy.makers.ReflectedRegionsBackgroundMaker.

Here, we will use the RX J1713-3945 observations from the H.E.S.S. first public test data release. The tutorial is implemented with the intermediate level API.

Setup

As usual, we’ll start with some general imports…

[1]:
%matplotlib inline
import matplotlib.pyplot as plt
[2]:
import astropy.units as u
from astropy.coordinates import SkyCoord, Angle
from regions import CircleSkyRegion
from gammapy.maps import MapAxis, RegionGeom
from gammapy.modeling import Fit
from gammapy.data import DataStore
from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
from gammapy.datasets import Datasets, SpectrumDataset
from gammapy.makers import (
    SafeMaskMaker,
    SpectrumDatasetMaker,
    ReflectedRegionsBackgroundMaker,
)

Select the data

We first set the data store and retrieve a few observations of our source.

[3]:
datastore = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1/")
obs_ids = [20326, 20327, 20349, 20350, 20396, 20397]
# In case you want to use all RX J1713 data in the HESS DR1
# other_ids=[20421, 20422, 20517, 20518, 20519, 20521, 20898, 20899, 20900]

observations = datastore.get_observations(obs_ids)
No HDU found matching: OBS_ID = 20326, HDU_TYPE = rad_max, HDU_CLASS = None
No HDU found matching: OBS_ID = 20327, HDU_TYPE = rad_max, HDU_CLASS = None
No HDU found matching: OBS_ID = 20349, HDU_TYPE = rad_max, HDU_CLASS = None
No HDU found matching: OBS_ID = 20350, HDU_TYPE = rad_max, HDU_CLASS = None
No HDU found matching: OBS_ID = 20396, HDU_TYPE = rad_max, HDU_CLASS = None
No HDU found matching: OBS_ID = 20397, HDU_TYPE = rad_max, HDU_CLASS = None

Prepare the datasets creation

Select the ON region

Here we take a simple 1 degree circular region because it fits the morphology of RX J1713-3945 well. More complex regions could be used, e.g. a ~regions.EllipseSkyRegion or a ~regions.RectangleSkyRegion.

[4]:
target_position = SkyCoord(347.3, -0.5, unit="deg", frame="galactic")
radius = Angle("0.5 deg")
on_region = CircleSkyRegion(target_position, radius)

Define the geometries

This part is especially important.

- We first have to define the energy axes. They define the axes of the resulting gammapy.datasets.SpectrumDatasetOnOff. In particular, we have to be careful with the true energy axis: it has to cover a larger range than the reconstructed energy axis.
- Then we define the region geometry itself from the ON region.

[5]:
# The binning of the final spectrum is defined here.
energy_axis = MapAxis.from_energy_bounds(0.1, 40.0, 10, unit="TeV")

# Reduced IRFs are defined in true energy (i.e. not measured energy).
energy_axis_true = MapAxis.from_energy_bounds(
    0.05, 100, 30, unit="TeV", name="energy_true"
)

geom = RegionGeom(on_region, axes=[energy_axis])

Create the makers

First we instantiate the target gammapy.datasets.SpectrumDataset.

[6]:
dataset_empty = SpectrumDataset.create(
    geom=geom,
    energy_axis_true=energy_axis_true,
)

Now we create its associated maker. Here we need to produce counts, exposure and edisp (energy dispersion) entries. PSF and IRF background are not needed, therefore we don’t compute them.

IMPORTANT: Note that use_region_center is set to False. This is necessary so that the gammapy.makers.SpectrumDatasetMaker considers the whole region in the IRF computation and not only the center.

[7]:
maker = SpectrumDatasetMaker(
    selection=["counts", "exposure", "edisp"], use_region_center=False
)

Now we create the OFF background maker for the spectra. If we have an exclusion region, we have to pass it here. We also define the safe range maker.

[8]:
bkg_maker = ReflectedRegionsBackgroundMaker()
safe_mask_maker = SafeMaskMaker(methods=["aeff-max"], aeff_percent=10)
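
The "aeff-max" method keeps only energy bins where the effective area exceeds a given percentage of its maximum. A schematic pure-Python version of that criterion (the numbers below are invented, not real H.E.S.S. values, and this is not the actual SafeMaskMaker implementation):

```python
# Toy effective-area curve per energy bin (invented values, in m^2)
aeff = [0.5, 20.0, 150.0, 400.0, 520.0, 560.0, 540.0, 500.0]
energy_edges_tev = [0.1, 0.18, 0.32, 0.56, 1.0, 1.8, 3.2, 5.6, 10.0]

aeff_percent = 10  # same meaning as SafeMaskMaker(aeff_percent=10)
threshold = max(aeff) * aeff_percent / 100

# Safe mask: True where the bin passes the criterion
safe_mask = [a >= threshold for a in aeff]
first_safe = safe_mask.index(True)
print(f"safe energy threshold: {energy_edges_tev[first_safe]} TeV")
```

Bins below the resulting threshold are excluded from the fit via the dataset mask.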

Perform the data reduction loop

We can now run over the selected observations. For each of them, we:

- create the gammapy.datasets.SpectrumDataset
- run the safe mask maker on it
- compute the OFF via the reflected-regions background method and create a gammapy.datasets.SpectrumDatasetOnOff object
- append the gammapy.datasets.SpectrumDatasetOnOff to the list

[9]:
%%time
datasets = Datasets()

for obs in observations:
    # A SpectrumDataset is filled in this geometry
    dataset = maker.run(dataset_empty.copy(name=f"obs-{obs.obs_id}"), obs)

    # Define safe mask
    dataset = safe_mask_maker.run(dataset, obs)

    # Compute OFF
    dataset = bkg_maker.run(dataset, obs)

    # Append dataset to the list
    datasets.append(dataset)
CPU times: user 1.92 s, sys: 90.6 ms, total: 2.01 s
Wall time: 2.01 s
[10]:
datasets.meta_table
[10]:
Table length=6
NAME       TYPE                  TELESCOP  OBS_ID  RA_PNT              DEC_PNT
                                                   deg                 deg
str9       str20                 str4      int64   float64             float64
obs-20326  SpectrumDatasetOnOff  HESS      20326   259.29852294921875  -39.76222229003906
obs-20327  SpectrumDatasetOnOff  HESS      20327   257.4773254394531   -39.76222229003906
obs-20349  SpectrumDatasetOnOff  HESS      20349   259.29852294921875  -39.76222229003906
obs-20350  SpectrumDatasetOnOff  HESS      20350   257.4773254394531   -39.76222229003906
obs-20396  SpectrumDatasetOnOff  HESS      20396   258.3879089355469   -39.06222152709961
obs-20397  SpectrumDatasetOnOff  HESS      20397   258.3879089355469   -40.462223052978516

Explore the results

We can peek at the content of the spectrum datasets:

[11]:
datasets[0].peek();
../../../_images/tutorials_analysis_1D_extended_source_spectral_analysis_21_0.png

Cumulative excess and significance

Finally, we can look at the cumulative significance and excess counts. This is done with the info_table method of gammapy.datasets.Datasets.

[12]:
info_table = datasets.info_table(cumulative=True)
[13]:
info_table
[13]:
Table length=6
(selected columns shown; the full table also includes npred, exposure, rate and fit-statistic entries)

name     counts  background  excess  sqrt_ts  livetime  counts_off  alpha
                                              s
stacked  1216    1045.5      170.5    4.159    1500.0     2091      0.5
stacked  2339    2068.5      270.5    4.722    2997.1     4137      0.5
stacked  3521    3040.5      480.5    6.881    4491.6     6081      0.5
stacked  4684    4031.0      653.0    8.115    5989.2     8062      0.5
stacked  5895    5020.3      874.7    9.870    7488.2    11030      0.452
stacked  6985    5991.8      993.2   10.251    8993.4    12973      0.458
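The sqrt_ts column can be cross-checked by hand: for a simple ON/OFF counting measurement it reduces to the classic Li & Ma (1983, Eq. 17) likelihood-ratio significance. A minimal check against the first row of the table (ON, OFF and alpha values taken from it):

```python
import math


def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983) Eq. 17 significance for an ON/OFF measurement."""
    n_tot = n_on + n_off
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * math.log((1 + alpha) * n_off / n_tot)
    return math.sqrt(2 * (term_on + term_off))


# First row of the cumulative info_table: 1216 ON, 2091 OFF, alpha = 0.5
excess = 1216 - 0.5 * 2091
print(f"excess: {excess}, significance: {li_ma_significance(1216, 2091, 0.5):.2f}")
```

This reproduces the excess (170.5) and the sqrt_ts (about 4.16) of the first cumulative row.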
[14]:
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(121)
ax.plot(
    info_table["livetime"].to("h"),
    info_table["excess"],
    marker="o",
    ls="none",
)

plt.xlabel("Livetime [h]")
plt.ylabel("Excess events")

ax = fig.add_subplot(122)
ax.plot(
    info_table["livetime"].to("h"),
    info_table["sqrt_ts"],
    marker="o",
    ls="none",
)

plt.xlabel("Livetime [h]")
plt.ylabel("Sqrt(TS)");
../../../_images/tutorials_analysis_1D_extended_source_spectral_analysis_25_0.png

Perform spectral model fitting

Here we perform a joint fit.

We first create the model, here a simple power law, and assign it to every dataset in the gammapy.datasets.Datasets.

[15]:
spectral_model = PowerLawSpectralModel(
    index=2, amplitude=2e-11 * u.Unit("cm-2 s-1 TeV-1"), reference=1 * u.TeV
)
model = SkyModel(spectral_model=spectral_model, name="RXJ 1713")

datasets.models = [model]

Now we can run the fit.

[16]:
fit_joint = Fit()
result_joint = fit_joint.run(datasets=datasets)
print(result_joint)
OptimizeResult

        backend    : minuit
        method     : migrad
        success    : True
        message    : Optimization terminated successfully.
        nfev       : 38
        total stat : 52.79


Explore the fit results

First the fitted parameters values and their errors.

[17]:
datasets.models.to_parameters_table()
[17]:
Table length=3

model     type      name       value       unit            error      min  max  frozen  link
str8      str8      str9       float64     str14           float64    float64  float64  bool  str1
RXJ 1713  spectral  index      2.1102e+00                  6.129e-02  nan  nan  False
RXJ 1713  spectral  amplitude  1.3576e-11  cm-2 s-1 TeV-1  9.757e-13  nan  nan  False
RXJ 1713  spectral  reference  1.0000e+00  TeV             0.000e+00  nan  nan  True
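
With the best-fit values above, the differential flux is simply phi(E) = amplitude * (E / reference)**(-index). A quick sanity check in pure Python (no Gammapy needed), using the fitted numbers from the table:

```python
def power_law(energy_tev, amplitude, index, reference_tev=1.0):
    """Differential flux dN/dE (cm-2 s-1 TeV-1) of a power law."""
    return amplitude * (energy_tev / reference_tev) ** (-index)


# Best-fit parameters from the table above
amplitude = 1.3576e-11  # cm-2 s-1 TeV-1
index = 2.1102

flux_1tev = power_law(1.0, amplitude, index)
flux_10tev = power_law(10.0, amplitude, index)
print(f"dN/dE at 1 TeV: {flux_1tev:.3e}, at 10 TeV: {flux_10tev:.3e}")
```

At the reference energy the flux equals the amplitude by construction; at 10 TeV it is suppressed by a factor 10**(-index).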

Then we plot the fit result to compare measured and expected counts. Rather than plotting them for each individual dataset, we stack all datasets and plot the fit result on the stacked one.

[18]:
# First stack them all
reduced = datasets.stack_reduce()
# Assign the fitted model
reduced.models = model
# Plot the result

ax_spectrum, ax_residuals = reduced.plot_fit()
reduced.plot_masks(ax=ax_spectrum);
../../../_images/tutorials_analysis_1D_extended_source_spectral_analysis_33_0.png