This is a fixed-text formatted version of a Jupyter notebook

# Spectral analysis of extended sources¶

## Context¶

Many VHE sources in the Galaxy are extended. Studying them with a 1D spectral analysis is more complex than studying point sources. One often has to use complex (i.e. non-circular) regions and, more importantly, one has to take into account that the instrument response is non-uniform over the selected region. A typical example is the supernova remnant RX J1713-3945, which is nearly 1 degree in diameter. See the following article.

Objective: Measure the spectrum of RX J1713-3945 in a 1 degree region fully enclosing it.

## Proposed approach¶

We have seen in the general presentation of spectrum extraction for point sources (see the corresponding notebook) that Gammapy uses specific dataset makers to first produce reduced spectral data and then to extract OFF measurements with the reflected-regions background technique: the gammapy.makers.SpectrumDatasetMaker and the gammapy.makers.ReflectedRegionsBackgroundMaker. The former simply computes the reduced IRFs at the center of the ON region (assumed to be circular).
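As a reminder, the reflected-regions technique takes OFF counts from copies of the ON region rotated around the pointing position, so that ON and OFF share the same offset from the pointing and hence the same acceptance. Here is a toy planar sketch of that placement (the actual gammapy.makers.ReflectedRegionsBackgroundMaker works on the celestial sphere and also handles exclusion regions; all numbers below are illustrative):

```python
import numpy as np

# Toy planar sketch of reflected-region placement. The ON region is a
# circle of given radius, offset from the pointing position; OFF regions
# are copies of it rotated around the pointing.
pointing = np.array([0.0, 0.0])
on_center = np.array([1.0, 0.0])  # ON region center, 1 deg from pointing
radius = 0.4                      # ON region radius (deg)

offset = np.linalg.norm(on_center - pointing)

# Minimum rotation step so neighbouring regions do not overlap
step = 2 * np.arcsin(radius / offset)

# Number of available slots around the circle, minus the one occupied
# by the ON region itself
n_off = int(2 * np.pi // step) - 1

angles = step * np.arange(1, n_off + 1)
off_centers = offset * np.column_stack([np.cos(angles), np.sin(angles)])
```

All OFF centers sit at the same offset as the ON region, which is the key property that lets the background be estimated without an acceptance model.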

This approach is no longer valid for extended sources. To compute average responses over the ON region, Gammapy relies on the creation of a cube enclosing it (i.e. a gammapy.datasets.MapDataset), which can then be reduced to a simple spectrum (i.e. a gammapy.datasets.SpectrumDataset). We can then proceed with the OFF extraction as in the standard point-source case.
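To see what "average responses over the ON region" amounts to, here is a toy numpy sketch with made-up effective-area values (not the actual Gammapy internals): instead of taking the response at the region center, each energy bin gets a weighted mean over the map pixels covering the ON region.

```python
import numpy as np

# Toy illustration: effective area varies across 3 pixels of the ON
# region (values are made up), one row per energy bin.
aeff = np.array([[1.0e5, 1.1e5, 0.9e5],   # energy bin 1, per pixel (m2)
                 [2.0e5, 2.2e5, 1.8e5]])  # energy bin 2, per pixel (m2)

# Weights would in practice reflect e.g. the exposure per pixel;
# uniform here for simplicity.
weights = np.array([1.0, 1.0, 1.0])

# Region-averaged response per energy bin
avg_aeff = np.average(aeff, axis=1, weights=weights)
```

For a point source one would instead evaluate the IRF at a single position; for a 1-degree region the two can differ substantially, which is why the cube-based reduction is needed.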

In summary, we have to:

• Define an ON region (a ~regions.SkyRegion) fully enclosing the source we want to study.

• Define a geometry that fully contains the region and that covers the required energy range (beware in particular of the true energy range).

• Create the necessary makers:

• the map dataset maker: gammapy.makers.MapDatasetMaker

• the OFF background maker, here a gammapy.makers.ReflectedRegionsBackgroundMaker

• and usually the safe mask maker: gammapy.makers.SafeMaskMaker

• Perform the data reduction loop. For every observation:

• Produce a map dataset and squeeze it to a spectrum dataset with gammapy.datasets.MapDataset.to_spectrum_dataset(on_region)

• Extract the OFF data to produce a gammapy.datasets.SpectrumDatasetOnOff and compute a safe range for it.

• Stack or store the resulting spectrum dataset.

• Finally proceed with model fitting on the dataset as usual.
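Among these steps, the true energy coverage is the easiest one to get wrong. Since a MapAxis built with from_energy_bounds is essentially a set of log-spaced bin edges, the axes used later in this tutorial can be sanity-checked with plain numpy:

```python
import numpy as np

# Reconstructed-energy axis: 10 log-spaced bins between 0.3 and 40 TeV,
# mirroring MapAxis.from_energy_bounds(0.3, 40.0, 10, unit="TeV")
e_reco_edges = np.geomspace(0.3, 40.0, 11)

# True-energy axis: 30 bins between 0.05 and 100 TeV, deliberately wider
e_true_edges = np.geomspace(0.05, 100.0, 31)

# Energy dispersion migrates events across bins, so the true-energy
# range must fully contain the reconstructed one
assert e_true_edges[0] < e_reco_edges[0]
assert e_true_edges[-1] > e_reco_edges[-1]
```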

Here, we will use the RX J1713-3945 observations from the H.E.S.S. first public test data release. The tutorial is implemented with the intermediate level API.

## Setup¶

[1]:

%matplotlib inline
import matplotlib.pyplot as plt

[2]:

import astropy.units as u
from astropy.coordinates import SkyCoord, Angle
from regions import CircleSkyRegion
from gammapy.maps import Map, MapAxis, WcsGeom
from gammapy.modeling import Fit
from gammapy.data import DataStore
from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
from gammapy.datasets import Datasets, MapDataset
from gammapy.makers import (
    MapDatasetMaker,
    ReflectedRegionsBackgroundMaker,
    SafeMaskMaker,
)


## Select the data¶

We first set up the data store and retrieve a few observations of our source.

[3]:

datastore = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1/")
obs_ids = [20326, 20327, 20349, 20350, 20396, 20397]
# In case you want to use all RX J1713 data in the HESS DR1
# other_ids=[20421, 20422, 20517, 20518, 20519, 20521, 20898, 20899, 20900]
observations = datastore.get_observations(obs_ids)

## Prepare the datasets creation¶

### Select the ON region¶

Here we take a simple 1 degree circular region because it fits well with the morphology of RX J1713-3945. More complex regions could be used, e.g. a ~regions.EllipseSkyRegion or a ~regions.RectangleSkyRegion.

[4]:

target_position = SkyCoord(347.3, -0.5, unit="deg", frame="galactic")
radius = Angle("0.5 deg")
on_region = CircleSkyRegion(target_position, radius)

### Define the geometries¶

This part is especially important.

• We first have to define the energy axes. They define the axes of the resulting gammapy.datasets.SpectrumDatasetOnOff. In particular, we have to be careful with the true energy axis: it has to cover a larger range than the reconstructed energy axis.

• Then we define the geometry itself. It does not need to be very finely binned, but it should fully enclose the ON region. To limit CPU and memory usage, one should avoid using a much larger region.

[5]:

# The binning of the final spectrum is defined here.
energy_axis = MapAxis.from_energy_bounds(0.3, 40.0, 10, unit="TeV")

# Reduced IRFs are defined in true energy (i.e. not measured energy).
energy_axis_true = MapAxis.from_energy_bounds(
    0.05, 100, 30, unit="TeV", name="energy_true"
)

# Here we use 1.5 degree which is slightly larger than needed.
geom = WcsGeom.create(
    skydir=target_position,
    binsz=0.04,
    width=(1.5, 1.5),
    frame="galactic",
    proj="CAR",
    axes=[energy_axis],
)

### Create the makers¶

First we instantiate the target gammapy.datasets.MapDataset.

[6]:

stacked = MapDataset.create(
    geom=geom, energy_axis_true=energy_axis_true, name="rxj-stacked"
)

Now we create its associated maker.
Here we need to produce counts, exposure and edisp (energy dispersion) entries. PSF and IRF background are not needed, therefore we don't compute them.

[7]:

maker = MapDatasetMaker(selection=["counts", "exposure", "edisp"])

Now we create the OFF background maker for the spectra. If we have an exclusion region, we have to pass it here. We also define the safe mask maker.

[8]:

bkg_maker = ReflectedRegionsBackgroundMaker()
safe_mask_maker = SafeMaskMaker(
    methods=["aeff-default", "aeff-max"], aeff_percent=10
)

## Perform the data reduction loop¶

We can now run over the selected observations. For each of them, we:

• create the map dataset and stack it onto our target dataset,

• squeeze the map dataset to a spectrum dataset in the ON region,

• compute the OFF and create a gammapy.datasets.SpectrumDatasetOnOff object,

• run the safe mask maker on it,

• add the gammapy.datasets.SpectrumDatasetOnOff to the list.

[9]:

%%time
spectrum_datasets = []

for obs in observations:
    # A MapDataset is filled in this geometry
    dataset = maker.run(stacked, obs)
    # To make images, the resulting dataset cutout is stacked onto the final one
    stacked.stack(dataset)
    # Extract 1D spectrum
    spectrum_dataset = dataset.to_spectrum_dataset(on_region)
    # Compute OFF
    spectrum_dataset = bkg_maker.run(spectrum_dataset, obs)
    # Define safe mask
    spectrum_dataset = safe_mask_maker.run(spectrum_dataset, obs)
    # Append dataset to the list
    spectrum_datasets.append(spectrum_dataset)

datasets = Datasets(spectrum_datasets)

No background model defined for dataset rxj-stacked
No background model defined for dataset xqydEZQ2
No background model defined for dataset 0mmqLOUU
No background model defined for dataset jYzPEntg
No background model defined for dataset eG2etPji
No background model defined for dataset knczIFtq
No background model defined for dataset 7eXBq6Wh

CPU times: user 4.96 s, sys: 170 ms, total: 5.13 s
Wall time: 5.11 s

## Explore the results¶

First let's look at the data to see if our region is correct. We plot it over the excess. To do so we convert it to a pixel region using the WCS information stored on the geom.

[10]:

stacked.counts.sum_over_axes().smooth(width="0.05 deg").plot()
on_region.to_pixel(stacked.counts.geom.wcs).plot()

[10]:

<WCSAxesSubplot:xlabel='Galactic Longitude', ylabel='Galactic Latitude'>

We now turn to the spectral datasets. We can peek at their content:

[11]:

datasets[0].peek()

### Cumulative excess and significance¶

Finally, we can look at the cumulative significance and number of excess events. This is done with the info_table method of gammapy.datasets.Datasets.
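The significance reported in that table follows the Li & Ma (1983) formula. As a standalone check, we can reproduce the first row of the cumulative table (450 ON counts, 687 OFF counts, alpha = 0.5) with plain numpy:

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Eq. 17 of Li & Ma (1983): significance of an ON/OFF measurement."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2 * (term_on + term_off))

# First row of the cumulative info_table: 450 ON counts, 687 OFF counts,
# alpha = a_on / a_off = 1 / 2
excess = 450 - 0.5 * 687                          # 106.5, the "excess" column
significance = li_ma_significance(450, 687, 0.5)  # ≈ 4.4055
```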
[12]:

info_table = datasets.info_table(cumulative=True)

No background model defined for dataset xqydEZQ2

[13]:

info_table

[13]:

Table length=6

name      livetime   n_on   background  excess  significance  background_rate  gamma_rate  a_on  n_off   a_off   alpha
              s                                                    1 / s          1 / s
xqydEZQ2    1683.0    450.0     343.5    106.5      4.4055         0.2041        0.0633     1.0   687.0   2.000   0.500
xqydEZQ2    3366.0    845.0     670.5    174.5      5.2154         0.1992        0.0518     1.0  1341.0   2.000   0.500
xqydEZQ2    5048.0   1288.0     983.0    305.0      7.4579         0.1947        0.0604     1.0  1966.0   2.000   0.500
xqydEZQ2    6730.0   1725.0    1341.0    384.0      8.0753         0.1993        0.0571     1.0  2682.0   2.000   0.500
xqydEZQ2    8413.0   2198.0    1653.0    545.0     10.2310         0.1965        0.0648     1.0  3618.0   2.178   0.459
xqydEZQ2   10095.0   2626.0    2001.5    624.5     10.6681         0.1983        0.0619     1.0  4315.0   2.147   0.466

[14]:

fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(121)
ax.plot(
    info_table["livetime"].to("h"), info_table["excess"], marker="o", ls="none"
)
plt.xlabel("Livetime [h]")
plt.ylabel("Excess events")
ax = fig.add_subplot(122)
ax.plot(
    info_table["livetime"].to("h"),
    info_table["significance"],
    marker="o",
    ls="none",
)
plt.xlabel("Livetime [h]")
plt.ylabel("Significance");

[14]:

Text(0, 0.5, 'Significance')

## Perform spectral model fitting¶

Here we perform a joint fit. We first create the model, here a simple power law, and assign it to every dataset in the gammapy.datasets.Datasets.
[15]:

spectral_model = PowerLawSpectralModel(
    index=2, amplitude=2e-11 * u.Unit("cm-2 s-1 TeV-1"), reference=1 * u.TeV
)
model = SkyModel(spectral_model=spectral_model)

for dataset in datasets:
    dataset.models = model

No background model defined for dataset xqydEZQ2
No background model defined for dataset 0mmqLOUU
No background model defined for dataset jYzPEntg
No background model defined for dataset eG2etPji
No background model defined for dataset knczIFtq
No background model defined for dataset 7eXBq6Wh

Now we can run the fit.

[16]:

fit_joint = Fit(datasets)
result_joint = fit_joint.run()
print(result_joint)

OptimizeResult

	backend    : minuit
	method     : minuit
	success    : True
	message    : Optimization terminated successfully.
	nfev       : 43
	total stat : 72.95

### Explore the fit results¶

First the fitted parameter values and their errors.

[17]:

result_joint.parameters.to_table()

[17]:

Table length=3

name       value      unit            min  max  frozen  error
index      2.102e+00                  nan  nan  False   6.692e-02
amplitude  1.287e-11  cm-2 s-1 TeV-1  nan  nan  False   1.035e-12
reference  1.000e+00  TeV             nan  nan  True    0.000e+00

Then we plot the fit result to compare measured and expected counts. Rather than plotting them for each individual dataset, we stack all datasets and plot the fit result on the stacked spectrum.

[18]:

plt.figure(figsize=(8, 6))

# First stack them all
reduced = datasets.stack_reduce()
# Assign the fitted model
reduced.models = model
# Plot the result
reduced.plot_fit();

No background model defined for dataset ldIvxJSC

[18]:

(<AxesSubplot:xlabel='Energy [TeV]', ylabel='Counts'>,
 <AxesSubplot:xlabel='Energy [TeV]', ylabel='Residuals (data - model)'>)
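As a quick cross-check of the fitted power law (index Γ ≈ 2.102, amplitude A ≈ 1.287e-11 cm-2 s-1 TeV-1 at the reference energy E0 = 1 TeV), the integral flux above a threshold can be computed analytically, since ∫ A (E/E0)^-Γ dE from E_min to infinity equals A E0 / (Γ - 1) × (E_min/E0)^(1-Γ). A small numpy sketch using the fitted values:

```python
# Fitted power-law parameters from the joint fit above
index = 2.102          # spectral index Γ
amplitude = 1.287e-11  # cm-2 s-1 TeV-1 at the reference energy
reference = 1.0        # reference energy E0 (TeV)

def integral_flux(e_min):
    """Analytic integral of A*(E/E0)**-Γ from e_min (TeV) to infinity."""
    return (
        amplitude * reference / (index - 1) * (e_min / reference) ** (1 - index)
    )

flux_above_1tev = integral_flux(1.0)  # ≈ 1.17e-11 cm-2 s-1
```

This kind of analytic check is a useful complement to the fit output, since it only depends on the two fitted parameters.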