This is a fixed-text formatted version of a Jupyter notebook

Spectral analysis with Gammapy

Prerequisites

  • Understanding how spectral extraction is performed in Cherenkov astronomy, in particular regarding OFF background measurements.

  • Understanding the basic data reduction and modeling/fitting process with the gammapy library API, as shown in the first gammapy analysis with the gammapy library API tutorial

Context

While 3D analysis allows, in principle, to deal with complex situations such as overlapping sources, in many cases it is not required just to extract the spectrum of a source. Spectral analysis, where all data inside an ON region are binned into 1D datasets, provides a convenient alternative.

In classical Cherenkov astronomy, spectral analysis is combined with a specific background estimation technique that relies on OFF measurements taken in the field of view, in regions where the background rate is assumed to be equal to the one in the ON region.

This allows the use of a specific fit statistic for ON-OFF measurements, the wstat (see gammapy.stats.fit_statistics), where no background model is assumed. Instead, the background is treated as a set of nuisance parameters. This removes some systematic effects connected to the choice or quality of the background model, but it comes at the expense of larger statistical uncertainties on the fitted model parameters.
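To make this concrete, here is a minimal sketch of evaluating the wstat statistic directly with gammapy.stats.wstat. The counts below are invented for illustration, and the argument names assume the signature documented in gammapy.stats.fit_statistics:

import numpy as np
from gammapy.stats import wstat

# Hypothetical ON/OFF counts and exposure ratio alpha = a_on / a_off
n_on, n_off, alpha = 25, 80, 0.1

# Evaluate the statistic on a grid of predicted signal counts mu_sig;
# the minimum corresponds to the best-fit signal, close to the
# simple estimate n_on - alpha * n_off = 17 in this example.
mu_sig = np.linspace(0, 40, 401)
stat = wstat(n_on=n_on, n_off=n_off, alpha=alpha, mu_sig=mu_sig)
print("best-fit signal:", mu_sig[np.argmin(stat)])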

Objective: perform a full region-based spectral analysis of four Crab observations from the H.E.S.S. data release 1 and fit the resulting datasets.

Introduction

Here, as usual, we use the gammapy.data.DataStore to retrieve a list of selected observations (gammapy.data.Observations). Then, we define the ON region containing the source and the geometry of the gammapy.datasets.SpectrumDataset object we want to produce. We then create the corresponding dataset Maker.

We have to define the maker object that will extract the OFF counts from reflected regions in the field of view. To ensure we only use data in an energy range where the quality of the IRFs is good enough, we also create a safe mask maker.

We can then proceed with data reduction with a loop over all selected observations to produce datasets in the relevant geometry.

We can then explore the resulting datasets and look at the cumulative signal and significance of our source. We finally proceed with model fitting.

In practice, we have to:

  • Create a gammapy.data.DataStore pointing to the relevant data

  • Apply an observation selection to produce a list of observations, a gammapy.data.Observations object.

  • Define the geometry of the spectrum we want to produce:
      - Create a regions.CircleSkyRegion for the ON extraction region
      - Create gammapy.maps.MapAxis objects for the energy binning: one for the reconstructed (i.e. measured) energy, the other for the true energy (i.e. the one used by IRFs and models)

  • Create the necessary makers:
      - the spectrum dataset maker: gammapy.makers.SpectrumDatasetMaker
      - the OFF background maker, here a gammapy.makers.ReflectedRegionsBackgroundMaker
      - the safe mask maker: gammapy.makers.SafeMaskMaker

  • Perform the data reduction loop, and for every observation:
      - Apply the makers sequentially to produce a gammapy.datasets.SpectrumDatasetOnOff
      - Append it to the list of datasets

  • Define the gammapy.modeling.models.SkyModel to apply to the datasets.

  • Create a gammapy.modeling.Fit object and run it to fit the model parameters

  • Apply a gammapy.estimators.FluxPointsEstimator to compute flux points for the spectral part of the fit.

Setup

As usual, we’ll start with some setup …

[1]:
%matplotlib inline
import matplotlib.pyplot as plt
[2]:
# Check package versions
import gammapy
import numpy as np
import astropy
import regions

print("gammapy:", gammapy.__version__)
print("numpy:", np.__version__)
print("astropy", astropy.__version__)
print("regions", regions.__version__)
gammapy: 0.18.dev690+gede9fe4b8
numpy: 1.19.1
astropy 4.0.1.post1
regions 0.4
[3]:
from pathlib import Path
import astropy.units as u
from astropy.coordinates import SkyCoord, Angle
from regions import CircleSkyRegion
from gammapy.maps import Map, MapAxis
from gammapy.modeling import Fit
from gammapy.data import DataStore
from gammapy.datasets import (
    Datasets,
    SpectrumDataset,
    SpectrumDatasetOnOff,
    FluxPointsDataset,
)
from gammapy.modeling.models import (
    PowerLawSpectralModel,
    create_crab_spectral_model,
    SkyModel,
)
from gammapy.makers import (
    SafeMaskMaker,
    SpectrumDatasetMaker,
    ReflectedRegionsBackgroundMaker,
)
from gammapy.estimators import FluxPointsEstimator
from gammapy.visualization import plot_spectrum_datasets_off_regions

Load Data

First, we select and load some H.E.S.S. observations of the Crab nebula (simulated events for now).

We will access the events, effective area, energy dispersion, livetime and PSF for containment correction.

[4]:
datastore = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1/")
obs_ids = [23523, 23526, 23559, 23592]
observations = datastore.get_observations(obs_ids)

Define Target Region

The next step is to define a signal extraction region, also known as ON region. In the simplest case this is just a CircleSkyRegion, which is what we use here.

[5]:
target_position = SkyCoord(ra=83.63, dec=22.01, unit="deg", frame="icrs")
on_region_radius = Angle("0.11 deg")
on_region = CircleSkyRegion(center=target_position, radius=on_region_radius)

Create exclusion mask

We will use the reflected regions method to place off regions to estimate the background level in the on region. To make sure the off regions don’t contain gamma-ray emission, we create an exclusion mask.

Using http://gamma-sky.net/ we find that there’s only one known gamma-ray source near the Crab nebula: the AGN called RGB J0521+212 at GLON = 183.604 deg and GLAT = -8.708 deg.

[6]:
exclusion_region = CircleSkyRegion(
    center=SkyCoord(183.604, -8.708, unit="deg", frame="galactic"),
    radius=0.5 * u.deg,
)

skydir = target_position.galactic
exclusion_mask = Map.create(
    npix=(150, 150), binsz=0.05, skydir=skydir, proj="TAN", frame="icrs"
)

mask = exclusion_mask.geom.region_mask([exclusion_region], inside=False)
exclusion_mask.data = mask
exclusion_mask.plot();
[6]:
(<Figure size 432x288 with 1 Axes>,
 <WCSAxesSubplot:xlabel='Right Ascension', ylabel='Declination'>,
 None)
../_images/tutorials_spectrum_analysis_12_1.png

Run data reduction chain

We begin with the configuration of the maker classes:

[7]:
e_reco = MapAxis.from_energy_bounds(0.1, 40, 40, unit="TeV", name="energy")
e_true = MapAxis.from_energy_bounds(
    0.05, 100, 200, unit="TeV", name="energy_true"
)
dataset_empty = SpectrumDataset.create(
    e_reco=e_reco, e_true=e_true, region=on_region
)
[8]:
dataset_maker = SpectrumDatasetMaker(
    containment_correction=False, selection=["counts", "aeff", "edisp"]
)
bkg_maker = ReflectedRegionsBackgroundMaker(exclusion_mask=exclusion_mask)
safe_mask_masker = SafeMaskMaker(methods=["aeff-max"], aeff_percent=10)
[9]:
%%time
datasets = Datasets()

for obs_id, observation in zip(obs_ids, observations):
    dataset = dataset_maker.run(
        dataset_empty.copy(name=str(obs_id)), observation
    )
    dataset_on_off = bkg_maker.run(dataset, observation)
    dataset_on_off = safe_mask_masker.run(dataset_on_off, observation)
    datasets.append(dataset_on_off)
No background model defined for dataset 23523
No background model defined for dataset 23526
No background model defined for dataset 23559
No background model defined for dataset 23592
CPU times: user 3.67 s, sys: 101 ms, total: 3.77 s
Wall time: 3.76 s

Plot off regions

[10]:
plt.figure(figsize=(8, 8))
_, ax, _ = exclusion_mask.plot()
on_region.to_pixel(ax.wcs).plot(ax=ax, edgecolor="k")
plot_spectrum_datasets_off_regions(ax=ax, datasets=datasets)
../_images/tutorials_spectrum_analysis_18_0.png

Source statistic

Next we’re going to look at the overall source statistics in our signal region.

[11]:
info_table = datasets.info_table(cumulative=True)
No background model defined for dataset 23523
[12]:
info_table
[12]:
Table length=4
name   livetime [s]        n_on   background          excess              significance        background_rate [1/s]  gamma_rate [1/s]     a_on  n_off   a_off               alpha
23523  1581.7367584109306  177.0  14.916666666666666  162.08333333333331  21.050838590041725  0.00943056206245879    0.10247174978481756  1.0   179.0   12.0                0.08333333333333333
23523  3154.4234824180603  353.0  31.249998092651367  321.7499694824219   29.365437182489867  0.009906722501538161   0.10199961142686545  1.0   375.0   12.0                0.0833333358168602
23523  4732.546999931335   503.0  41.884151458740234  461.1158752441406   36.942598279731286  0.008850234653633219   0.09743503345045088  1.0   811.0   19.362934112548828  0.05164506658911705
23523  6313.811640620232   632.0  53.3841552734375    578.6159057617188   41.67900714876158   0.00845513903677262    0.09164288368046389  1.0   1225.0  22.94688606262207   0.043578896671533585
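As a quick sanity check of the table above, the background and excess columns follow directly from n_off and alpha; a minimal sketch using the values of the first row:

# Cross-check of the first row: the estimated background is alpha * n_off
# and the excess is n_on minus that background.
n_on, n_off, alpha = 177.0, 179.0, 1 / 12

background = alpha * n_off  # ~14.92, matches the background column
excess = n_on - background  # ~162.08, matches the excess column
print(f"background = {background:.2f}, excess = {excess:.2f}")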
[13]:
plt.plot(
    info_table["livetime"].to("h"), info_table["excess"], marker="o", ls="none"
)
plt.xlabel("Livetime [h]")
plt.ylabel("Excess");
[13]:
Text(0, 0.5, 'Excess')
../_images/tutorials_spectrum_analysis_22_1.png
[14]:
plt.plot(
    info_table["livetime"].to("h"),
    info_table["significance"],
    marker="o",
    ls="none",
)
plt.xlabel("Livetime [h]")
plt.ylabel("Significance");
[14]:
Text(0, 0.5, 'Significance')
../_images/tutorials_spectrum_analysis_23_1.png

Finally you can write the extracted datasets to disk using the OGIP format (PHA, ARF, RMF, BKG):

[15]:
path = Path("spectrum_analysis")
path.mkdir(exist_ok=True)
[16]:
for dataset in datasets:
    dataset.to_ogip_files(outdir=path, overwrite=True)

If you want to read back the datasets from disk you can use:

[17]:
datasets = Datasets()
for obs_id in obs_ids:
    filename = path / f"pha_obs{obs_id}.fits"
    datasets.append(SpectrumDatasetOnOff.from_ogip_files(filename))
No background model defined for dataset 23523
No background model defined for dataset 23526
No background model defined for dataset 23559
No background model defined for dataset 23592

Fit spectrum

Now we’ll fit a global model to the spectrum. First we do a joint likelihood fit to all observations. If you want to stack the observations see below. We will also produce a debug plot in order to show how the global fit matches one of the individual observations.

[18]:
spectral_model = PowerLawSpectralModel(
    index=2, amplitude=2e-11 * u.Unit("cm-2 s-1 TeV-1"), reference=1 * u.TeV
)
model = SkyModel(spectral_model=spectral_model, name="crab")

for dataset in datasets:
    dataset.models = model

fit_joint = Fit(datasets)
result_joint = fit_joint.run()

# we make a copy here to compare it later
model_best_joint = model.copy()
No background model defined for dataset 23523
No background model defined for dataset 23526
No background model defined for dataset 23559
No background model defined for dataset 23592

Fit quality and model residuals

We can access the results dictionary to see if the fit converged:

[19]:
print(result_joint)
OptimizeResult

        backend    : minuit
        method     : minuit
        success    : True
        message    : Optimization terminated successfully.
        nfev       : 49
        total stat : 124.41

A simple way to inspect the model residuals is to use the SpectrumDataset.plot_fit() method:

[20]:
plt.figure(figsize=(8, 6))
ax_spectrum, ax_residual = datasets[0].plot_fit()
ax_spectrum.set_ylim(0.1, 40)
[20]:
(0.1, 40)
../_images/tutorials_spectrum_analysis_35_1.png

For more ways of assessing fit quality, please refer to the dedicated modeling and fitting tutorial.

Compute Flux Points

To round up our analysis we can compute flux points by fitting the norm of the global model in energy bands. We’ll use a fixed energy binning for now:

[21]:
e_min, e_max = 0.7, 30
e_edges = np.logspace(np.log10(e_min), np.log10(e_max), 11) * u.TeV

Now we create an instance of the gammapy.estimators.FluxPointsEstimator, by passing the dataset and the energy binning:

[22]:
fpe = FluxPointsEstimator(e_edges=e_edges, source="crab")
flux_points = fpe.run(datasets=datasets)
No background model defined for dataset 23523-0.700 TeV-1.019 TeV
... (the same message is repeated for each of the four observations and each of the ten energy bins) ...
Value 0.0 is outside bounds [0.0, 100000.0] for parameter 'norm'

Here is the table of the resulting flux points:

[23]:
flux_points.table_formatted
[23]:
Table length=10
counts [4]  e_ref [TeV]  e_min [TeV]  e_max [TeV]  ref_dnde [1/(cm2 s TeV)]  ref_flux [1/(cm2 s)]  ref_eflux [TeV/(cm2 s)]  ref_e2dnde [TeV/(cm2 s)]  norm  stat  success  norm_err  ts  sqrt_ts  norm_errp  norm_errn  norm_ul  norm_scan [11]  stat_scan [11]  dnde [1/(cm2 s TeV)]  dnde_ul  dnde_err  dnde_errp  dnde_errn
61 .. 42  0.877  0.701  1.099  3.772e-11  1.519e-11  1.309e-11  2.905e-11  0.966  16.146  True  0.078  584.992  24.187  0.075  0.072  1.121  0.200 .. 5.000  248.574 .. 934.398  3.644e-11  4.227e-11  2.939e-12  2.844e-12  2.716e-12
20 .. 25  1.276  1.099  1.482  1.431e-11  5.524e-12  6.992e-12  2.331e-11  0.990  10.939  True  0.105  264.797  16.273  0.120  0.111  1.238  0.200 .. 5.000  114.917 .. 385.548  1.418e-11  1.772e-11  1.498e-12  1.711e-12  1.592e-12
36 .. 20  1.856  1.482  2.323  5.432e-12  4.626e-12  8.430e-12  1.871e-11  1.212  7.844  True  0.117  350.376  18.718  0.129  0.121  1.478  0.200 .. 5.000  167.005 .. 342.934  6.586e-12  8.029e-12  6.357e-13  6.989e-13  6.551e-13
11 .. 12  2.699  2.323  3.135  2.061e-12  1.682e-12  4.503e-12  1.501e-11  1.195  9.790  True  0.175  157.668  12.557  0.192  0.174  1.597  0.200 .. 5.000  80.721 .. 165.444  2.464e-12  3.291e-12  3.607e-13  3.951e-13  3.585e-13
14 .. 8  3.924  3.135  4.913  7.822e-13  1.409e-12  5.429e-12  1.205e-11  0.861  22.656  True  0.156  92.915  9.639  0.167  0.150  1.214  0.200 .. 5.000  57.809 .. 214.136  6.737e-13  9.499e-13  1.220e-13  1.310e-13  1.171e-13
7 .. 4  5.707  4.913  6.629  2.968e-13  5.123e-13  2.900e-12  9.667e-12  1.111  6.505  True  0.272  63.182  7.949  0.297  0.253  1.749  0.200 .. 5.000  32.826 .. 73.939  3.298e-13  5.193e-13  8.083e-14  8.803e-14  7.521e-14
5 .. 5  8.299  6.629  10.390  1.126e-13  4.290e-13  3.496e-12  7.758e-12  1.106  14.176  True  0.308  44.347  6.659  0.318  0.271  1.793  0.200 .. 5.000  34.899 .. 74.713  1.246e-13  2.020e-13  3.469e-14  3.580e-14  3.051e-14
0 .. 0  12.068  10.390  14.018  4.275e-14  1.560e-13  1.868e-12  6.226e-12  0.772  11.511  True  0.423  8.495  2.915  0.476  0.355  1.858  0.200 .. 5.000  14.892 .. 40.219  3.299e-14  7.943e-14  1.808e-14  2.035e-14  1.517e-14
1 .. 0  17.550  14.018  21.971  1.622e-14  1.307e-13  2.252e-12  4.996e-12  0.424  7.712  True  0.306  3.175  1.782  0.374  0.234  1.333  0.200 .. 5.000  8.603 .. 41.037  6.873e-15  2.163e-14  4.957e-15  6.068e-15  3.794e-15
0 .. 0  25.521  21.971  29.645  6.156e-15  4.751e-14  1.203e-12  4.009e-12  0.000  0.524  True  0.000  0.000  0.000  0.281  0.281  1.124  0.200 .. 5.000  1.235 .. 18.312  0.000e+00  6.921e-15  6.063e-21  1.730e-15  1.730e-15

Now we plot the flux points and their likelihood profiles. For the plotting of upper limits we choose a threshold of TS < 4.

[24]:
plt.figure(figsize=(8, 5))
flux_points.table["is_ul"] = flux_points.table["ts"] < 4
ax = flux_points.plot(
    energy_power=2, flux_unit="erg-1 cm-2 s-1", color="darkorange"
)
flux_points.to_sed_type("e2dnde").plot_ts_profiles(ax=ax)
[24]:
<AxesSubplot:xlabel='Energy (TeV)', ylabel='e2dnde (erg / (cm2 s))'>
../_images/tutorials_spectrum_analysis_44_2.png

The final plot with the best fit model, flux points and residuals can be quickly made like this:

[25]:
flux_points_dataset = FluxPointsDataset(
    data=flux_points, models=model_best_joint
)
[26]:
plt.figure(figsize=(8, 6))
flux_points_dataset.peek();
[26]:
(<AxesSubplot:xlabel='Energy [TeV]', ylabel='E2 * Flux [erg / (cm2 s)]'>,
 <AxesSubplot:xlabel='Energy (TeV)', ylabel='Residuals '>)
../_images/tutorials_spectrum_analysis_47_2.png

Stack observations

An alternative approach to fitting the spectrum is to stack all observations first and then fit a model. For this we first stack the individual datasets:

[27]:
dataset_stacked = Datasets(datasets).stack_reduce()
No background model defined for dataset _8-5z--U

Again we set the model on the dataset we would like to fit (in this case it’s only a single one) and pass it to the gammapy.modeling.Fit object:

[28]:
dataset_stacked.models = model
stacked_fit = Fit([dataset_stacked])
result_stacked = stacked_fit.run()

# make a copy to compare later
model_best_stacked = model.copy()
No background model defined for dataset _8-5z--U
[29]:
print(result_stacked)
OptimizeResult

        backend    : minuit
        method     : minuit
        success    : True
        message    : Optimization terminated successfully.
        nfev       : 35
        total stat : 29.88

[30]:
model_best_joint.parameters.to_table()
[30]:
Table length=3
name       value      unit            min  max  frozen  error
index      2.588e+00                  nan  nan  False   5.945e-02
amplitude  2.690e-11  cm-2 s-1 TeV-1  nan  nan  False   1.260e-12
reference  1.000e+00  TeV             nan  nan  True    0.000e+00
[31]:
model_best_stacked.parameters.to_table()
[31]:
Table length=3
name       value      unit            min  max  frozen  error
index      2.591e+00                  nan  nan  False   6.678e-02
amplitude  2.693e-11  cm-2 s-1 TeV-1  nan  nan  False   1.392e-12
reference  1.000e+00  TeV             nan  nan  True    0.000e+00

Finally, we compare the results of our stacked analysis to a previously published Crab nebula spectrum for reference. This is available in gammapy.modeling.models.create_crab_spectral_model.

[32]:
plot_kwargs = {
    "energy_range": [0.1, 30] * u.TeV,
    "energy_power": 2,
    "flux_unit": "erg-1 cm-2 s-1",
}

# plot stacked model
model_best_stacked.spectral_model.plot(
    **plot_kwargs, label="Stacked analysis result"
)
model_best_stacked.spectral_model.plot_error(**plot_kwargs)

# plot joint model
model_best_joint.spectral_model.plot(
    **plot_kwargs, label="Joint analysis result", ls="--"
)
model_best_joint.spectral_model.plot_error(**plot_kwargs)

create_crab_spectral_model("hess_pl").plot(
    **plot_kwargs, label="Crab reference"
)
plt.legend()
[32]:
<matplotlib.legend.Legend at 0x7f7e35a7ef98>
../_images/tutorials_spectrum_analysis_56_2.png

Exercises

Now you have learned the basics of a spectral analysis with Gammapy. To practice, you can continue with a few exercises of your own; one possibility is sketched below.
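For example, one possible exercise (our own suggestion, reusing the dataset_stacked object defined above) is to re-fit the stacked spectrum with a different spectral model, such as a log parabola, and compare the fit statistic and parameters to the power-law result. A minimal sketch:

import astropy.units as u
from gammapy.modeling import Fit
from gammapy.modeling.models import LogParabolaSpectralModel, SkyModel

# Swap the spectral model on the stacked dataset and re-run the fit
log_parabola = LogParabolaSpectralModel(
    amplitude=2e-11 * u.Unit("cm-2 s-1 TeV-1"),
    reference=1 * u.TeV,
    alpha=2.3,
    beta=0.2,
)
dataset_stacked.models = SkyModel(spectral_model=log_parabola, name="crab-logpar")

fit_logpar = Fit([dataset_stacked])
result_logpar = fit_logpar.run()
print(result_logpar)
log_parabola.parameters.to_table()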

What next?

The method shown in this tutorial is valid for point-like or mildly extended sources, where we can assume that the IRF taken at the region center is valid over the whole region. If one wants to extract the 1D spectrum of a larger source and properly average the response over the extraction region, one has to use a different approach, explained in the extended source spectral analysis tutorial.
