This is a fixed-text formatted version of a Jupyter notebook

3D detailed analysis

This tutorial performs a 3D map-based analysis of the Galactic center region, using simulated observations from the first CTA Data Challenge (CTA-1DC). We will use the high-level interface for the data reduction, and then perform detailed modelling. This will be done in two different ways:

  • stacking all the maps together and fitting the stacked maps

  • handling all the observations separately and doing a joint fitting on all the maps

[1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import astropy.units as u
from pathlib import Path
from regions import CircleSkyRegion
from scipy.stats import norm
from gammapy.analysis import Analysis, AnalysisConfig
from gammapy.modeling.models import (
    SkyModel,
    ExpCutoffPowerLawSpectralModel,
    PointSpatialModel,
    FoVBackgroundModel,
    Models,
)
from gammapy.modeling import Fit
from gammapy.maps import Map
from gammapy.estimators import ExcessMapEstimator
from gammapy.datasets import MapDataset

Analysis configuration

In this section we select observations and define the analysis geometries, irrespective of joint/stacked analysis. For the configuration of the analysis, we will programmatically build a config file from scratch.

[2]:
config = AnalysisConfig()
# The config is created with default values only.
print(config)
AnalysisConfig

    general:
        log: {level: info, filename: null, filemode: null, format: null, datefmt: null}
        outdir: .
    observations:
        datastore: $GAMMAPY_DATA/hess-dl3-dr1
        obs_ids: []
        obs_file: null
        obs_cone: {frame: null, lon: null, lat: null, radius: null}
        obs_time: {start: null, stop: null}
    datasets:
        type: 1d
        stack: true
        geom:
            wcs:
                skydir: {frame: null, lon: null, lat: null}
                binsize: 0.02 deg
                width: {width: 5.0 deg, height: 5.0 deg}
                binsize_irf: 0.2 deg
            selection: {offset_max: 2.5 deg}
            axes:
                energy: {min: 1.0 TeV, max: 10.0 TeV, nbins: 5}
                energy_true: {min: 0.5 TeV, max: 20.0 TeV, nbins: 16}
        map_selection: [counts, exposure, background, psf, edisp]
        background:
            method: null
            exclusion: null
            parameters: {}
        safe_mask:
            methods: [aeff-default]
            parameters: {}
        on_region: {frame: null, lon: null, lat: null, radius: null}
        containment_correction: true
    fit:
        fit_range: {min: null, max: null}
    flux_points:
        energy: {min: null, max: null, nbins: null}
        source: source
        parameters: {selection_optional: all}
    excess_map:
        correlation_radius: 0.1 deg
        parameters: {}
        energy_edges: {min: null, max: null, nbins: null}
    light_curve:
        time_intervals: {start: null, stop: null}
        energy_edges: {min: null, max: null, nbins: null}
        source: source
        parameters: {selection_optional: all}

[3]:
# Selecting the observations
config.observations.datastore = "$GAMMAPY_DATA/cta-1dc/index/gps/"
config.observations.obs_ids = [110380, 111140, 111159]
[4]:
# Defining a reference geometry for the reduced datasets

config.datasets.type = "3d"  # Analysis type is 3D

config.datasets.geom.wcs.skydir = {
    "lon": "0 deg",
    "lat": "0 deg",
    "frame": "galactic",
}  # The WCS geometry - centered on the galactic center
config.datasets.geom.wcs.width = {"width": "10 deg", "height": "8 deg"}
config.datasets.geom.wcs.binsize = "0.02 deg"

# Cutout size (for the run-wise event selection)
config.datasets.geom.selection.offset_max = 3.5 * u.deg
config.datasets.safe_mask.methods = ["aeff-default", "offset-max"]

# We now fix the energy axis for the counts map - (the reconstructed energy binning)
config.datasets.geom.axes.energy.min = "0.1 TeV"
config.datasets.geom.axes.energy.max = "10 TeV"
config.datasets.geom.axes.energy.nbins = 10

# We now fix the energy axis for the IRF maps (exposure, etc) - (the true energy binning)
config.datasets.geom.axes.energy_true.min = "0.08 TeV"
config.datasets.geom.axes.energy_true.max = "12 TeV"
config.datasets.geom.axes.energy_true.nbins = 14
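
The energy axes defined above are log-spaced. As a minimal sketch of what this binning amounts to, the reconstructed energy bin edges can be computed with plain numpy (Gammapy builds the equivalent axis internally from the `min`, `max` and `nbins` values in the config):

```python
import numpy as np

# 10 log-spaced bins between 0.1 and 10 TeV -> 11 bin edges
energy_min, energy_max, nbins = 0.1, 10.0, 10
edges = np.geomspace(energy_min, energy_max, nbins + 1)

# each bin spans the same factor in energy
print(edges.round(3))
```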
[5]:
print(config)
AnalysisConfig

    general:
        log: {level: info, filename: null, filemode: null, format: null, datefmt: null}
        outdir: .
    observations:
        datastore: $GAMMAPY_DATA/cta-1dc/index/gps
        obs_ids: [110380, 111140, 111159]
        obs_file: null
        obs_cone: {frame: null, lon: null, lat: null, radius: null}
        obs_time: {start: null, stop: null}
    datasets:
        type: 3d
        stack: true
        geom:
            wcs:
                skydir: {frame: galactic, lon: 0.0 deg, lat: 0.0 deg}
                binsize: 0.02 deg
                width: {width: 10.0 deg, height: 8.0 deg}
                binsize_irf: 0.2 deg
            selection: {offset_max: 3.5 deg}
            axes:
                energy: {min: 0.1 TeV, max: 10.0 TeV, nbins: 10}
                energy_true: {min: 0.08 TeV, max: 12.0 TeV, nbins: 14}
        map_selection: [counts, exposure, background, psf, edisp]
        background:
            method: null
            exclusion: null
            parameters: {}
        safe_mask:
            methods: [aeff-default, offset-max]
            parameters: {}
        on_region: {frame: null, lon: null, lat: null, radius: null}
        containment_correction: true
    fit:
        fit_range: {min: null, max: null}
    flux_points:
        energy: {min: null, max: null, nbins: null}
        source: source
        parameters: {selection_optional: all}
    excess_map:
        correlation_radius: 0.1 deg
        parameters: {}
        energy_edges: {min: null, max: null, nbins: null}
    light_curve:
        time_intervals: {start: null, stop: null}
        energy_edges: {min: null, max: null, nbins: null}
        source: source
        parameters: {selection_optional: all}

Configuration for stacked and joint analysis

This is done simply by setting the config.datasets.stack flag. Since the internal machinery works differently in the two cases, we write two config files and save them to disk in YAML format for future reference.

[6]:
config_stack = config.copy(deep=True)
config_stack.datasets.stack = True

config_joint = config.copy(deep=True)
config_joint.datasets.stack = False
[7]:
# To avoid unnecessary clutter, we write the config files to a separate folder.
path = Path("analysis_3d")
path.mkdir(exist_ok=True)
config_joint.write(path=path / "config_joint.yaml", overwrite=True)
config_stack.write(path=path / "config_stack.yaml", overwrite=True)

Stacked analysis

Data reduction

We first show the steps for the stacked analysis, and later repeat the same steps for the joint analysis.

[8]:
# Reading yaml file:
config_stacked = AnalysisConfig.read(path=path / "config_stack.yaml")
[9]:
analysis_stacked = Analysis(config_stacked)
Setting logging config: {'level': 'INFO', 'filename': None, 'filemode': None, 'format': None, 'datefmt': None}
[10]:
%%time
# select observations:
analysis_stacked.get_observations()

# run data reduction
analysis_stacked.get_datasets()
Fetching observations.
Number of selected observations: 3
Creating reference dataset and makers.
Creating the background Maker.
No background maker set. Check configuration.
Start the data reduction loop.
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No default thresholds defined for obs 110380
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No default thresholds defined for obs 111140
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No default thresholds defined for obs 111159
CPU times: user 12 s, sys: 3.56 s, total: 15.6 s
Wall time: 15.5 s

We now have one final stacked dataset, which you can print and explore:

[11]:
dataset_stacked = analysis_stacked.datasets["stacked"]
print(dataset_stacked)
MapDataset
----------

  Name                            : stacked

  Total counts                    : 121241
  Total background counts         : 108043.52
  Total excess counts             : 13197.48

  Predicted counts                : 108043.52
  Predicted background counts     : 108043.52
  Predicted excess counts         : nan

  Exposure min                    : 6.28e+07 m2 s
  Exposure max                    : 1.90e+10 m2 s

  Number of total bins            : 2000000
  Number of fit bins              : 1411180

  Fit statistic type              : cash
  Fit statistic value (-2 log(L)) : nan

  Number of models                : 0
  Number of parameters            : 0
  Number of free parameters       : 0
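
The printout above reports "Fit statistic type: cash". For reference, here is a self-contained numpy sketch of how this Poisson fit statistic, 2·Σ(μ − n ln μ), is evaluated from observed and predicted counts (toy values, not this dataset):

```python
import numpy as np

def cash(counts, npred):
    """Cash statistic: 2 * sum(npred - counts * ln(npred)) over all bins."""
    counts = np.asarray(counts, dtype=float)
    npred = np.asarray(npred, dtype=float)
    return 2.0 * np.sum(npred - counts * np.log(npred))

counts = np.array([5, 3, 8])   # observed counts per bin
npred = np.array([4.2, 3.5, 7.9])  # predicted counts per bin

print(f"{cash(counts, npred):.3f}")
```

The fit minimises this quantity; it is lowest when the predicted counts match the observed ones.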


[12]:
# To plot a smooth counts map
dataset_stacked.counts.smooth(0.02 * u.deg).plot_interactive(add_cbar=True)
[13]:
# And the background map
dataset_stacked.background.plot_interactive(add_cbar=True)
[14]:
# You can also get an excess image with a few lines of code:
excess = dataset_stacked.excess.sum_over_axes()
excess.smooth("0.06 deg").plot(stretch="sqrt", add_cbar=True);
../../../_images/tutorials_analysis_3D_analysis_3d_20_0.png

Modeling and fitting

Now comes the interesting part of the analysis - choosing appropriate models for our source and fitting them.

We choose a point source model with an exponential cutoff power-law spectrum.

To perform the fit on a restricted energy range, we can create a specific mask. On the dataset, the mask_fit is a Map sharing the same geometry as the MapDataset and containing boolean data.

To create a mask to limit the fit within a restricted energy range, one can rely on the gammapy.maps.Geom.energy_mask() method.

For more details on masks and the techniques to create them in Gammapy, please refer to the dedicated masks tutorial.

[15]:
dataset_stacked.mask_fit = dataset_stacked.counts.geom.energy_mask(
    energy_min=0.3 * u.TeV, energy_max=None
)
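
Conceptually, energy_mask() amounts to a boolean cut on the energy axis, broadcast over the spatial dimensions. A plain-numpy sketch of that idea (toy bin edges, not the actual dataset geometry):

```python
import numpy as np

# toy reconstructed energy bin edges in TeV (log-spaced, as in the config)
edges = np.geomspace(0.1, 10.0, 11)

# a bin is kept if it lies entirely above energy_min = 0.3 TeV,
# i.e. its lower edge is above the threshold
energy_min = 0.3
mask_energy = edges[:-1] >= energy_min  # one boolean per energy bin

# broadcast to an (energy, lat, lon) cube, since mask_fit is a 3D map
mask_fit = np.broadcast_to(mask_energy[:, None, None], (10, 4, 4))

print(mask_energy)
```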
[16]:
spatial_model = PointSpatialModel(
    lon_0="-0.05 deg", lat_0="-0.05 deg", frame="galactic"
)
spectral_model = ExpCutoffPowerLawSpectralModel(
    index=2.3,
    amplitude=2.8e-12 * u.Unit("cm-2 s-1 TeV-1"),
    reference=1.0 * u.TeV,
    lambda_=0.02 / u.TeV,
)

model = SkyModel(
    spatial_model=spatial_model,
    spectral_model=spectral_model,
    name="gc-source",
)

bkg_model = FoVBackgroundModel(dataset_name="stacked")
bkg_model.spectral_model.norm.value = 1.3

models_stacked = Models([model, bkg_model])

dataset_stacked.models = models_stacked
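
For reference, the exponential cutoff power law used above has the form phi(E) = A (E/E0)^(-index) * exp(-(lambda * E)^alpha). A minimal numpy sketch evaluating this shape with the starting values from the cell above (units omitted; Gammapy's ExpCutoffPowerLawSpectralModel handles units properly):

```python
import numpy as np

def ecpl(energy, amplitude=2.8e-12, index=2.3, reference=1.0, lambda_=0.02, alpha=1.0):
    """dN/dE of an exponential cutoff power law (energies in TeV)."""
    return amplitude * (energy / reference) ** (-index) * np.exp(-((lambda_ * energy) ** alpha))

energy = np.array([0.3, 1.0, 3.0])  # TeV
print(ecpl(energy))
```

With lambda_ = 0 the cutoff term drops out and the model reduces to a pure power law.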
[17]:
%%time
fit = Fit(optimize_opts={"print_level": 1})
result = fit.run(datasets=[dataset_stacked])
┌──────────────────────────────────┬──────────────────────────────────────┐
│ FCN = 1.805e+05                  │        Nfcn = 189 (189 total)        │
│ EDM = 2.84e-05 (Goal: 0.0002)    │                                      │
├───────────────┬──────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ Valid Parameters │        No Parameters at limit        │
├───────────────┴──────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10)  │           Below call limit           │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│   Hesse ok    │  Has Covariance  │ Accurate  │  Pos. def.  │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
CPU times: user 21.4 s, sys: 18.1 ms, total: 21.4 s
Wall time: 21.4 s

Fit quality assessment and model residuals for a MapDataset

We can access the results dictionary to see if the fit converged:

[18]:
print(result)
{'optimize_result': OptimizeResult

        backend    : minuit
        method     : minuit
        success    : True
        message    : Optimization terminated successfully.
        nfev       : 189
        total stat : 180458.12
, 'covariance_result': CovarianceResult

        backend    : minuit
        method     : hesse
        success    : True
        message    : Hesse terminated successfully.
}

Check best-fit parameters and error estimates:

[19]:
result["optimize_result"].parameters.to_table()
[19]:
Table length=10

  type      name       value         unit        error       min        max     frozen
  str8      str9      float64       str14       float64    float64    float64    bool
-------- --------- ----------- -------------- --------- ---------- ---------- ------
spectral     index  2.4147e+00                1.523e-01        nan        nan  False
spectral amplitude  2.6627e-12 cm-2 s-1 TeV-1 3.102e-13        nan        nan  False
spectral reference  1.0000e+00            TeV 0.000e+00        nan        nan   True
spectral   lambda_ -1.3499e-02          TeV-1 6.839e-02        nan        nan  False
spectral     alpha  1.0000e+00                0.000e+00        nan        nan   True
 spatial     lon_0 -4.8059e-02            deg 2.598e-03        nan        nan  False
 spatial     lat_0 -5.2606e-02            deg 2.256e-03 -9.000e+01  9.000e+01  False
spectral      norm  1.3481e+00                9.314e-03        nan        nan  False
spectral      tilt  0.0000e+00                0.000e+00        nan        nan   True
spectral reference  1.0000e+00            TeV 0.000e+00        nan        nan   True

A quick way to inspect the model residuals is using the function ~MapDataset.plot_residuals_spatial(). This function computes and plots a residual image (by default, the smoothing radius is 0.1 deg and method="diff", which corresponds to a simple data - model plot):

[20]:
dataset_stacked.plot_residuals_spatial(
    method="diff/sqrt(model)", vmin=-1, vmax=1
);
/usr/share/miniconda/envs/gammapy-dev/lib/python3.7/site-packages/astropy/visualization/wcsaxes/core.py:211: MatplotlibDeprecationWarning: Passing parameters norm and vmin/vmax simultaneously is deprecated since 3.3 and will become an error two minor releases later. Please pass vmin/vmax directly to the norm when creating it.
  return super().imshow(X, *args, origin=origin, **kwargs)
../../../_images/tutorials_analysis_3D_analysis_3d_32_1.png

The more general function ~MapDataset.plot_residuals() can also extract and display spectral residuals in a region:

[21]:
region = CircleSkyRegion(spatial_model.position, radius=0.15 * u.deg)

dataset_stacked.plot_residuals(
    kwargs_spatial=dict(method="diff/sqrt(model)", vmin=-1, vmax=1),
    kwargs_spectral=dict(region=region),
);
/usr/share/miniconda/envs/gammapy-dev/lib/python3.7/site-packages/astropy/visualization/wcsaxes/core.py:211: MatplotlibDeprecationWarning: Passing parameters norm and vmin/vmax simultaneously is deprecated since 3.3 and will become an error two minor releases later. Please pass vmin/vmax directly to the norm when creating it.
  return super().imshow(X, *args, origin=origin, **kwargs)
../../../_images/tutorials_analysis_3D_analysis_3d_34_1.png

This way of accessing residuals is quick and handy, but comes with limitations:

  • If a fit energy range was defined using MapDataset.mask_fit, it is not taken into account: residuals are summed over the whole reconstructed energy range.

  • For a proper statistical treatment, a residual significance map should be computed instead of simple residuals.
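
To make the residual method options concrete, here is a toy numpy sketch of the definitions used by plot_residuals_spatial() ("diff" and "diff/sqrt(model)"); this is illustrative only, and the significance map below accounts for Poisson statistics properly:

```python
import numpy as np

counts = np.array([[12.0, 9.0], [7.0, 15.0]])  # toy counts image
model = np.array([[10.0, 10.0], [8.0, 11.0]])  # toy predicted counts

residual_diff = counts - model                      # method="diff"
residual_sig = (counts - model) / np.sqrt(model)    # method="diff/sqrt(model)"

print(residual_sig.round(3))
```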

A more accurate way to inspect spatial residuals is the following:

[22]:
estimator = ExcessMapEstimator(
    correlation_radius="0.1 deg",
    selection_optional=[],
    energy_edges=[0.1, 1, 10] * u.TeV,
)

result = estimator.run(dataset_stacked)

result["sqrt_ts"].plot_grid(
    figsize=(12, 4), cmap="coolwarm", add_cbar=True, vmin=-5, vmax=5, ncols=2
);
/usr/share/miniconda/envs/gammapy-dev/lib/python3.7/site-packages/astropy/visualization/wcsaxes/core.py:211: MatplotlibDeprecationWarning: Passing parameters norm and vmin/vmax simultaneously is deprecated since 3.3 and will become an error two minor releases later. Please pass vmin/vmax directly to the norm when creating it.
  return super().imshow(X, *args, origin=origin, **kwargs)
../../../_images/tutorials_analysis_3D_analysis_3d_36_1.png

Distribution of residuals significance in the full map geometry:

[23]:
# TODO: clean this up
significance_data = result["sqrt_ts"].data

# #Remove bins that are inside an exclusion region, that would create an artificial peak at significance=0.
# #For now these lines are useless, because to_image() drops the mask fit
# mask_data = dataset_image.mask_fit.sum_over_axes().data
# excluded = mask_data == 0
# significance_data = significance_data[~excluded]
selection = np.isfinite(significance_data) & (significance_data != 0)
significance_data = significance_data[selection]

plt.hist(significance_data, density=True, alpha=0.9, color="red", bins=40)
mu, std = norm.fit(significance_data)

x = np.linspace(-5, 5, 100)
p = norm.pdf(x, mu, std)

plt.plot(
    x,
    p,
    lw=2,
    color="black",
    label=r"$\mu$ = {:.2f}, $\sigma$ = {:.2f}".format(mu, std),
)
plt.legend(fontsize=17)
plt.xlim(-5, 5)
[23]:
(-5.0, 5.0)
../../../_images/tutorials_analysis_3D_analysis_3d_38_1.png

Joint analysis

In this section, we perform a joint analysis of the same data. Of course, joint fitting is computationally much heavier than the stacked analysis, and should always be handled with care. For brevity, we only show the analysis for a point-source fit, without adding a diffuse component.
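
In a joint fit, each observation keeps its own dataset and IRFs, and the total fit statistic is simply the sum of the per-dataset statistics, evaluated with shared source parameters. A toy sketch of that bookkeeping, reusing the Cash statistic (all values illustrative):

```python
import numpy as np

def cash(counts, npred):
    """Cash statistic: 2 * sum(npred - counts * ln(npred))."""
    return 2.0 * np.sum(npred - counts * np.log(npred))

# per-observation (counts, predicted counts) from one shared model
datasets = [
    (np.array([5.0, 8.0]), np.array([4.5, 8.2])),
    (np.array([6.0, 7.0]), np.array([6.3, 6.8])),
    (np.array([4.0, 9.0]), np.array([4.1, 9.4])),
]

# the joint fit minimises the sum over datasets
total_stat = sum(cash(counts, npred) for counts, npred in datasets)
print(f"{total_stat:.3f}")
```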

Data reduction

[24]:
%%time

# Read the yaml file from disk
config_joint = AnalysisConfig.read(path=path / "config_joint.yaml")
analysis_joint = Analysis(config_joint)

# select observations:
analysis_joint.get_observations()

# run data reduction
analysis_joint.get_datasets()
Setting logging config: {'level': 'INFO', 'filename': None, 'filemode': None, 'format': None, 'datefmt': None}
Fetching observations.
Number of selected observations: 3
Creating reference dataset and makers.
Creating the background Maker.
No background maker set. Check configuration.
Start the data reduction loop.
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No default thresholds defined for obs 110380
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No default thresholds defined for obs 111140
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No default thresholds defined for obs 111159
CPU times: user 11.5 s, sys: 3.52 s, total: 15 s
Wall time: 15 s
[25]:
# You can see there are 3 datasets now
print(analysis_joint.datasets)
Datasets
--------

Dataset 0:

  Type       : MapDataset
  Name       : vvGHWnyP
  Instrument : CTA
  Models     :

Dataset 1:

  Type       : MapDataset
  Name       : 5uUBqLNE
  Instrument : CTA
  Models     :

Dataset 2:

  Type       : MapDataset
  Name       : FG7qM8Yn
  Instrument : CTA
  Models     :


[26]:
# You can access each one by name or by index, eg:
print(analysis_joint.datasets[0])
MapDataset
----------

  Name                            : vvGHWnyP

  Total counts                    : 40481
  Total background counts         : 36014.51
  Total excess counts             : 4466.49

  Predicted counts                : 36014.51
  Predicted background counts     : 36014.51
  Predicted excess counts         : nan

  Exposure min                    : 6.28e+07 m2 s
  Exposure max                    : 6.68e+09 m2 s

  Number of total bins            : 1085000
  Number of fit bins              : 693940

  Fit statistic type              : cash
  Fit statistic value (-2 log(L)) : nan

  Number of models                : 0
  Number of parameters            : 0
  Number of free parameters       : 0


After the data reduction stage, it is useful to get a quick summary of the datasets; the info_table() method collects the main statistics of each dataset into a single table.

[27]:
analysis_joint.datasets.info_table()
[27]:
Table length=3 (columns split over two blocks, values rounded)

  name    counts background  excess sqrt_ts    npred npred_background npred_signal
-------- ------- ---------- ------- ------- -------- ---------------- ------------
 stacked   40481   36014.52 4466.48   23.07 36014.51         36014.52          nan
5uUBqLNE   40525   36014.49 4510.51   23.30 36014.49         36014.49          nan
FG7qM8Yn   40235   36014.52 4220.48   21.82 36014.52         36014.52          nan

  name   exposure_min exposure_max livetime ontime counts_rate background_rate excess_rate  n_bins n_fit_bins stat_type stat_sum
             m2 s         m2 s        s       s       1 / s          1 / s         1 / s
-------- ------------ ------------ -------- ------ ----------- --------------- ----------- ------- ---------- --------- --------
 stacked     6.28e+07     6.68e+09   1764.0 1800.0       22.95           20.42        2.53 1085000     693940      cash      nan
5uUBqLNE     6.28e+07     6.68e+09   1764.0 1800.0       22.97           20.42        2.56 1085000     693940      cash      nan
FG7qM8Yn     6.28e+07     6.68e+09   1764.0 1800.0       22.81           20.42        2.39 1085000     693940      cash      nan
[28]:
models_joint = Models()

model_joint = model.copy(name="source-joint")
models_joint.append(model_joint)

for dataset in analysis_joint.datasets:
    bkg_model = FoVBackgroundModel(dataset_name=dataset.name)
    models_joint.append(bkg_model)

print(models_joint)
Models

Component 0: SkyModel

  Name                      : source-joint
  Datasets names            : None
  Spectral model type       : ExpCutoffPowerLawSpectralModel
  Spatial  model type       : PointSpatialModel
  Temporal model type       :
  Parameters:
    index                   :   2.415
    amplitude               :   2.66e-12  1 / (cm2 s TeV)
    reference    (frozen)   :   1.000  TeV
    lambda_                 :  -0.013  1 / TeV
    alpha        (frozen)   :   1.000
    lon_0                   :  -0.048  deg
    lat_0                   :  -0.053  deg

Component 1: FoVBackgroundModel

  Name                      : vvGHWnyP-bkg
  Datasets names            : ['vvGHWnyP']
  Spectral model type       : PowerLawNormSpectralModel
  Parameters:
    norm                    :   1.000
    tilt         (frozen)   :   0.000
    reference    (frozen)   :   1.000  TeV

Component 2: FoVBackgroundModel

  Name                      : 5uUBqLNE-bkg
  Datasets names            : ['5uUBqLNE']
  Spectral model type       : PowerLawNormSpectralModel
  Parameters:
    norm                    :   1.000
    tilt         (frozen)   :   0.000
    reference    (frozen)   :   1.000  TeV

Component 3: FoVBackgroundModel

  Name                      : FG7qM8Yn-bkg
  Datasets names            : ['FG7qM8Yn']
  Spectral model type       : PowerLawNormSpectralModel
  Parameters:
    norm                    :   1.000
    tilt         (frozen)   :   0.000
    reference    (frozen)   :   1.000  TeV


[29]:
# and set the new model
analysis_joint.datasets.models = models_joint
[30]:
%%time
fit_joint = Fit()
result_joint = fit_joint.run(datasets=analysis_joint.datasets)
CPU times: user 37.8 s, sys: 109 ms, total: 37.9 s
Wall time: 37.9 s

Fit quality assessment and model residuals for a joint Datasets

We can access the results dictionary to see if the fit converged:

[31]:
print(result_joint)
{'optimize_result': OptimizeResult

        backend    : minuit
        method     : minuit
        success    : True
        message    : Optimization terminated successfully.
        nfev       : 267
        total stat : 748259.16
, 'covariance_result': CovarianceResult

        backend    : minuit
        method     : hesse
        success    : True
        message    : Hesse terminated successfully.
}

Check best-fit parameters and error estimates:

[32]:
print(models_joint)
Models

Component 0: SkyModel

  Name                      : source-joint
  Datasets names            : None
  Spectral model type       : ExpCutoffPowerLawSpectralModel
  Spatial  model type       : PointSpatialModel
  Temporal model type       :
  Parameters:
    index                   :   2.272
    amplitude               :   2.84e-12  1 / (cm2 s TeV)
    reference    (frozen)   :   1.000  TeV
    lambda_                 :   0.039  1 / TeV
    alpha        (frozen)   :   1.000
    lon_0                   :  -0.049  deg
    lat_0                   :  -0.053  deg

Component 1: FoVBackgroundModel

  Name                      : vvGHWnyP-bkg
  Datasets names            : ['vvGHWnyP']
  Spectral model type       : PowerLawNormSpectralModel
  Parameters:
    norm                    :   1.118
    tilt         (frozen)   :   0.000
    reference    (frozen)   :   1.000  TeV

Component 2: FoVBackgroundModel

  Name                      : 5uUBqLNE-bkg
  Datasets names            : ['5uUBqLNE']
  Spectral model type       : PowerLawNormSpectralModel
  Parameters:
    norm                    :   1.119
    tilt         (frozen)   :   0.000
    reference    (frozen)   :   1.000  TeV

Component 3: FoVBackgroundModel

  Name                      : FG7qM8Yn-bkg
  Datasets names            : ['FG7qM8Yn']
  Spectral model type       : PowerLawNormSpectralModel
  Parameters:
    norm                    :   1.111
    tilt         (frozen)   :   0.000
    reference    (frozen)   :   1.000  TeV


Since the joint dataset is made of multiple datasets, we can either:

  • look at the residuals for each dataset separately, as shown above in the section Fit quality assessment and model residuals for a MapDataset, or

  • look at a stacked residual map.

In the latter case, we need to properly stack the joint dataset before computing the residuals:

[33]:
# TODO: clean this up

# We need to stack on the full geometry, so we use the geoms from the stacked dataset.
stacked = MapDataset.from_geoms(**dataset_stacked.geoms)

for dataset in analysis_joint.datasets:
    # TODO: Apply mask_fit before stacking
    stacked.stack(dataset)

stacked.models = [model_joint]
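
Conceptually, the stack() calls above add the counts, background and exposure of the individual datasets bin by bin on the common geometry. A toy numpy sketch of that idea (MapDataset.stack also handles masks, PSF and energy dispersion, which is omitted here):

```python
import numpy as np

# toy per-observation maps on a shared 2x2 geometry
counts_per_obs = [np.array([[3, 1], [2, 4]]), np.array([[1, 2], [0, 3]])]
background_per_obs = [np.array([[2.5, 1.2], [1.8, 3.1]]), np.array([[1.1, 1.9], [0.4, 2.6]])]

# stacking sums the maps bin by bin
stacked_counts = np.sum(counts_per_obs, axis=0)
stacked_background = np.sum(background_per_obs, axis=0)
stacked_excess = stacked_counts - stacked_background

print(stacked_counts)
```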
[34]:
stacked.plot_residuals_spatial(vmin=-1, vmax=1);
/usr/share/miniconda/envs/gammapy-dev/lib/python3.7/site-packages/astropy/visualization/wcsaxes/core.py:211: MatplotlibDeprecationWarning: Passing parameters norm and vmin/vmax simultaneously is deprecated since 3.3 and will become an error two minor releases later. Please pass vmin/vmax directly to the norm when creating it.
  return super().imshow(X, *args, origin=origin, **kwargs)
../../../_images/tutorials_analysis_3D_analysis_3d_56_1.png

Then, we can access the stacked model residuals as previously shown in the section Fit quality and model residuals for a MapDataset in this notebook.

Finally, let us compare the spectral results from the stacked and joint fit:

[35]:
def plot_spectrum(model, result, label, color):
    spec = model.spectral_model
    energy_bounds = [0.3, 10] * u.TeV
    spec.plot(
        energy_bounds=energy_bounds, energy_power=2, label=label, color=color
    )
    spec.plot_error(energy_bounds=energy_bounds, energy_power=2, color=color)
[36]:
plot_spectrum(model, result, label="stacked", color="tab:blue")
plot_spectrum(model_joint, result_joint, label="joint", color="tab:orange")
plt.legend()
[36]:
<matplotlib.legend.Legend at 0x7f42b8ba4940>
../../../_images/tutorials_analysis_3D_analysis_3d_60_1.png

Summary

Note that this notebook aims to show you the procedure of a 3D analysis using just a few observations. Results get much better for a more complete analysis considering the GPS dataset from the CTA First Data Challenge (DC-1) and also the CTA model for the Galactic diffuse emission, as shown in the next image:

../../../_images/DC1_3d.png

The complete notebook for this analysis can be downloaded from the gammapy-extra repository at https://github.com/gammapy/gammapy-extra/blob/master/analyses/cta_1dc_gc_3d.ipynb.

Exercises

  • Analyse the second source in the field of view, G0.9+0.1, and add it to the combined model.

  • Perform the modelling in more detail: add a diffuse component and compute flux points.