This is a fixed-text formatted version of a Jupyter notebook

3D analysis

This tutorial does a 3D map-based analysis of the Galactic center, using simulated observations from the CTA-1DC. We will use the high level interface for the data reduction, and then do detailed modelling. This will be done in two different ways:

  • stacking all the maps together and fitting the stacked maps

  • handling all the observations separately and performing a joint fit on all the maps

[1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import astropy.units as u
from pathlib import Path
from regions import CircleSkyRegion
from scipy.stats import norm
from gammapy.analysis import Analysis, AnalysisConfig
from gammapy.modeling.models import (
    SkyModel,
    ExpCutoffPowerLawSpectralModel,
    PointSpatialModel,
)
from gammapy.modeling import Fit
from gammapy.maps import Map
from gammapy.estimators import ExcessMapEstimator
from gammapy.datasets import MapDataset

Analysis configuration

In this section we select observations and define the analysis geometries, irrespective of joint/stacked analysis. For the configuration of the analysis, we will programmatically build a config file from scratch.

[2]:
config = AnalysisConfig()
# The config file is now empty, with only a few defaults specified.
print(config)
AnalysisConfig

    general:
        log: {level: info, filename: null, filemode: null, format: null, datefmt: null}
        outdir: .
    observations:
        datastore: $GAMMAPY_DATA/hess-dl3-dr1
        obs_ids: []
        obs_file: null
        obs_cone: {frame: null, lon: null, lat: null, radius: null}
        obs_time: {start: null, stop: null}
    datasets:
        type: 1d
        stack: true
        geom:
            wcs:
                skydir: {frame: null, lon: null, lat: null}
                binsize: 0.02 deg
                fov: {width: 5.0 deg, height: 5.0 deg}
                binsize_irf: 0.2 deg
            selection: {offset_max: 2.5 deg}
            axes:
                energy: {min: 0.1 TeV, max: 10.0 TeV, nbins: 30}
                energy_true: {min: 0.1 TeV, max: 10.0 TeV, nbins: 30}
        map_selection: [counts, exposure, background, psf, edisp]
        background:
            method: null
            exclusion: null
            parameters: {}
        safe_mask:
            methods: [aeff-default]
            parameters: {}
        on_region: {frame: null, lon: null, lat: null, radius: null}
        containment_correction: true
    fit:
        fit_range: {min: 0.1 TeV, max: 10.0 TeV}
    flux_points:
        energy: {min: 0.1 TeV, max: 10.0 TeV, nbins: 30}
        source: source
        parameters: {}
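
The same configuration can alternatively be built from a YAML string. A minimal sketch (we assume AnalysisConfig.from_yaml accepts the same keys as printed above):

# Equivalent construction from a YAML string (sketch)
config_from_yaml = AnalysisConfig.from_yaml(
    """
observations:
    datastore: $GAMMAPY_DATA/cta-1dc/index/gps/
datasets:
    type: 3d
"""
)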

[3]:
# Selecting the observations
config.observations.datastore = "$GAMMAPY_DATA/cta-1dc/index/gps/"
config.observations.obs_ids = [110380, 111140, 111159]
[4]:
# Defining a reference geometry for the reduced datasets

config.datasets.type = "3d"  # Analysis type is 3D

config.datasets.geom.wcs.skydir = {
    "lon": "0 deg",
    "lat": "0 deg",
    "frame": "galactic",
}  # The WCS geometry - centered on the galactic center
config.datasets.geom.wcs.fov = {"width": "10 deg", "height": "8 deg"}
config.datasets.geom.wcs.binsize = "0.02 deg"

# The FoV radius to use for cutouts
config.datasets.geom.selection.offset_max = 3.5 * u.deg
config.datasets.safe_mask.methods = ["aeff-default", "offset-max"]

# We now fix the energy axis for the counts map (the reconstructed energy binning)
config.datasets.geom.axes.energy.min = "0.1 TeV"
config.datasets.geom.axes.energy.max = "10 TeV"
config.datasets.geom.axes.energy.nbins = 10

# We now fix the energy axis for the IRF maps (exposure, etc.) - the true energy binning
config.datasets.geom.axes.energy_true.min = "0.08 TeV"
config.datasets.geom.axes.energy_true.max = "12 TeV"
config.datasets.geom.axes.energy_true.nbins = 14
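
For illustration, these settings correspond to energy axes roughly equivalent to the following direct construction (a sketch, not the exact internal call):

from gammapy.maps import MapAxis

# Reconstructed energy axis: 10 log-spaced bins from 0.1 to 10 TeV
energy_axis = MapAxis.from_bounds(
    0.1, 10, nbin=10, unit="TeV", interp="log", name="energy"
)
# True energy axis: slightly wider range with finer binning
energy_true_axis = MapAxis.from_bounds(
    0.08, 12, nbin=14, unit="TeV", interp="log", name="energy_true"
)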
[5]:
print(config)
AnalysisConfig

    general:
        log: {level: info, filename: null, filemode: null, format: null, datefmt: null}
        outdir: .
    observations:
        datastore: $GAMMAPY_DATA/cta-1dc/index/gps
        obs_ids: [110380, 111140, 111159]
        obs_file: null
        obs_cone: {frame: null, lon: null, lat: null, radius: null}
        obs_time: {start: null, stop: null}
    datasets:
        type: 3d
        stack: true
        geom:
            wcs:
                skydir: {frame: galactic, lon: 0.0 deg, lat: 0.0 deg}
                binsize: 0.02 deg
                fov: {width: 10.0 deg, height: 8.0 deg}
                binsize_irf: 0.2 deg
            selection: {offset_max: 3.5 deg}
            axes:
                energy: {min: 0.1 TeV, max: 10.0 TeV, nbins: 10}
                energy_true: {min: 0.08 TeV, max: 12.0 TeV, nbins: 14}
        map_selection: [counts, exposure, background, psf, edisp]
        background:
            method: null
            exclusion: null
            parameters: {}
        safe_mask:
            methods: [aeff-default, offset-max]
            parameters: {}
        on_region: {frame: null, lon: null, lat: null, radius: null}
        containment_correction: true
    fit:
        fit_range: {min: 0.1 TeV, max: 10.0 TeV}
    flux_points:
        energy: {min: 0.1 TeV, max: 10.0 TeV, nbins: 30}
        source: source
        parameters: {}

Configuration for stacked and joint analysis

This is done just by specifying the flag config.datasets.stack. Since the internal machinery works differently in the two cases, we write two config files and save them to disk in YAML format for future reference.

[6]:
config_stack = config.copy(deep=True)
config_stack.datasets.stack = True

config_joint = config.copy(deep=True)
config_joint.datasets.stack = False
[7]:
# To avoid unnecessary clutter, we write them to a separate folder.
path = Path("analysis_3d")
path.mkdir(exist_ok=True)
config_joint.write(path=path / "config_joint.yaml", overwrite=True)
config_stack.write(path=path / "config_stack.yaml", overwrite=True)

Stacked analysis

Data reduction

We first show the steps for the stacked analysis, and repeat the same for the joint analysis later.

[8]:
# Reading yaml file:
config_stacked = AnalysisConfig.read(path=path / "config_stack.yaml")
[9]:
analysis_stacked = Analysis(config_stacked)
Setting logging config: {'level': 'INFO', 'filename': None, 'filemode': None, 'format': None, 'datefmt': None}
[10]:
%%time
# select observations:
analysis_stacked.get_observations()

# run data reduction
analysis_stacked.get_datasets()
Fetching observations.
Number of selected observations: 3
Creating geometry.
Creating datasets.
No background maker set for 3d analysis. Check configuration.
Processing observation 110380
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No thresholds defined for obs 110380
Processing observation 111140
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No thresholds defined for obs 111140
Processing observation 111159
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No thresholds defined for obs 111159
CPU times: user 16.9 s, sys: 4.17 s, total: 21.1 s
Wall time: 21.1 s

We have one final dataset, which we can print and explore:

[11]:
dataset_stacked = analysis_stacked.datasets["stacked"]
print(dataset_stacked)
MapDataset
----------

  Name                            : stacked

  Total counts                    : 121241
  Total predicted counts          : 108043.52
  Total background counts         : 108043.52

  Exposure min                    : 6.28e+07 m2 s
  Exposure max                    : 1.90e+10 m2 s

  Number of total bins            : 2000000
  Number of fit bins              : 1411180

  Fit statistic type              : cash
  Fit statistic value (-2 log(L)) : 625436.39

  Number of models                : 1
  Number of parameters            : 3
  Number of free parameters       : 1

  Component 0: BackgroundModel

    Name                      : stacked-bkg
    Datasets names            : ['stacked']
    Parameters:
      norm                    :   1.000
      tilt         (frozen)   :   0.000
      reference    (frozen)   :   1.000  TeV
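
Since the data reduction is the expensive step, it can be convenient to write the reduced dataset to disk and read it back in a later session. A minimal sketch (the file name is arbitrary):

dataset_stacked.write(path / "dataset-stacked.fits.gz", overwrite=True)
# ...and later:
dataset_stacked = MapDataset.read(path / "dataset-stacked.fits.gz")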


[12]:
# To plot a smoothed counts map
dataset_stacked.counts.smooth(0.02 * u.deg).plot_interactive(add_cbar=True)
[13]:
# And the background map
dataset_stacked.background_model.map.plot_interactive(add_cbar=True)
[14]:
# You can also get an excess image with a few lines of code:
counts = dataset_stacked.counts.sum_over_axes()
background = dataset_stacked.background_model.map.sum_over_axes()
excess = counts - background
excess.smooth("0.06 deg").plot(stretch="sqrt", add_cbar=True);
[14]:
(<Figure size 432x288 with 2 Axes>,
 <WCSAxesSubplot:xlabel='Galactic Longitude', ylabel='Galactic Latitude'>,
 <matplotlib.colorbar.Colorbar at 0x7faa822bdf28>)
../_images/tutorials_analysis_3d_20_1.png

Modeling and fitting

Now comes the interesting part of the analysis - choosing appropriate models for our source and fitting them.

We choose a point source model with an exponential cutoff power-law spectrum.

To select a certain energy range for the fit we can create a fit mask:

[15]:
coords = dataset_stacked.counts.geom.get_coord()
mask_energy = coords["energy"] > 0.3 * u.TeV
dataset_stacked.mask_fit = Map.from_geom(
    geom=dataset_stacked.counts.geom, data=mask_energy
)
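
The resulting mask is a boolean Map on the counts geometry; a quick sanity check of the fraction of bins kept for the fit (a sketch):

# Fraction of pixels with mask_fit == True
print(dataset_stacked.mask_fit.data.mean())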
[16]:
spatial_model = PointSpatialModel(
    lon_0="-0.05 deg", lat_0="-0.05 deg", frame="galactic"
)
spectral_model = ExpCutoffPowerLawSpectralModel(
    index=2.3,
    amplitude=2.8e-12 * u.Unit("cm-2 s-1 TeV-1"),
    reference=1.0 * u.TeV,
    lambda_=0.02 / u.TeV,
)

model = SkyModel(
    spatial_model=spatial_model,
    spectral_model=spectral_model,
    name="gc-source",
)

dataset_stacked.models.append(model)
dataset_stacked.background_model.norm.value = 1.3
[17]:
%%time
fit = Fit([dataset_stacked])
result = fit.run(optimize_opts={"print_level": 1})
------------------------------------------------------------------
| FCN = 2.802e+05               |     Ncalls=182 (182 total)     |
| EDM = 5.09e-05 (Goal: 0.0002) |            up = 1.0            |
------------------------------------------------------------------
|  Valid Min.   | Valid Param.  | Above EDM | Reached call limit |
------------------------------------------------------------------
|     True      |     True      |   False   |       False        |
------------------------------------------------------------------
| Hesse failed  |   Has cov.    | Accurate  | Pos. def. | Forced |
------------------------------------------------------------------
|     False     |     True      |   True    |   True    | False  |
------------------------------------------------------------------
CPU times: user 7.97 s, sys: 12.8 ms, total: 7.98 s
Wall time: 7.98 s

Fit quality assessment and model residuals for a MapDataset

We can access the results dictionary to see if the fit converged:

[18]:
print(result)
OptimizeResult

        backend    : minuit
        method     : minuit
        success    : True
        message    : Optimization terminated successfully.
        nfev       : 182
        total stat : 280219.17

Check best-fit parameters and error estimates:

[19]:
result.parameters.to_table()
[19]:
Table length=10
   name     value         unit          min        max     frozen   error
--------- ---------- -------------- ---------- ---------- ------ ---------
     norm  1.248e+00                 0.000e+00        nan  False 6.448e-03
     tilt  0.000e+00                       nan        nan   True 0.000e+00
reference  1.000e+00            TeV        nan        nan   True 0.000e+00
    index  2.272e+00                       nan        nan  False 1.151e-01
amplitude  2.838e-12 cm-2 s-1 TeV-1        nan        nan  False 3.275e-13
reference  1.000e+00            TeV        nan        nan   True 0.000e+00
  lambda_  3.689e-02          TeV-1        nan        nan  False 6.117e-02
    alpha  1.000e+00                       nan        nan   True 0.000e+00
    lon_0 -5.164e-02            deg        nan        nan  False 2.341e-03
    lat_0 -5.251e-02            deg -9.000e+01  9.000e+01  False 2.249e-03
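
Note that lambda_ is the inverse of the cutoff energy, so the best-fit value corresponds to a cutoff near 1 / 0.037 ≈ 27 TeV; with an error of the same order as the value, the cutoff is only weakly constrained here. A quick check (a sketch; the fit updates the spectral model parameters in place):

# Implied cutoff energy from the best-fit lambda_
e_cutoff = 1 / spectral_model.lambda_.quantity
print(e_cutoff)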

A quick way to inspect the model residuals is the function MapDataset.plot_residuals(). It computes and plots a residual image (by default, the smoothing radius is 0.1 deg and method="diff", which corresponds to a simple data - model plot):

[20]:
dataset_stacked.plot_residuals(method="diff/sqrt(model)", vmin=-1, vmax=1)
/usr/share/miniconda/envs/gammapy-dev/lib/python3.7/site-packages/astropy/visualization/wcsaxes/core.py:211: MatplotlibDeprecationWarning: Passing parameters norm and vmin/vmax simultaneously is deprecated since 3.3 and will become an error two minor releases later. Please pass vmin/vmax directly to the norm when creating it.
  return super().imshow(X, *args, origin=origin, **kwargs)
[20]:
(<WCSAxesSubplot:xlabel='Galactic Longitude', ylabel='Galactic Latitude'>,
 None)
../_images/tutorials_analysis_3d_31_2.png

The same function can also extract and display spectral residuals, if a region to use for the spectral extraction is passed:

[21]:
region = CircleSkyRegion(spatial_model.position, radius=0.15 * u.deg)

dataset_stacked.plot_residuals(
    method="diff/sqrt(model)", vmin=-1, vmax=1, region=region
)
/usr/share/miniconda/envs/gammapy-dev/lib/python3.7/site-packages/astropy/visualization/wcsaxes/core.py:211: MatplotlibDeprecationWarning: Passing parameters norm and vmin/vmax simultaneously is deprecated since 3.3 and will become an error two minor releases later. Please pass vmin/vmax directly to the norm when creating it.
  return super().imshow(X, *args, origin=origin, **kwargs)
[21]:
(<WCSAxesSubplot:xlabel='Galactic Longitude', ylabel='Galactic Latitude'>,
 <AxesSubplot:xlabel='Energy [TeV]', ylabel='Residuals ((data - model) / sqrt(model))'>)
../_images/tutorials_analysis_3d_33_2.png

This way of accessing residuals is quick and handy, but comes with limitations. For example:

  • In case a fitting energy range was defined using a MapDataset.mask_fit, it won't be taken into account. Residuals will be summed up over the whole reconstructed energy range.

  • For a proper statistical treatment, a residual significance map should be computed instead of simple residuals.

A more accurate way to inspect spatial residuals is the following:

[22]:
# TODO: clean this up
estimator = ExcessMapEstimator(
    correlation_radius="0.1 deg", selection_optional=[]
)
dataset_image = dataset_stacked.to_image()
estimator_dict = estimator.run(dataset_image)

residuals_significance = estimator_dict["significance"]
residuals_significance.sum_over_axes().plot(cmap="coolwarm", add_cbar=True)
/usr/share/miniconda/envs/gammapy-dev/lib/python3.7/site-packages/astropy/units/quantity.py:477: RuntimeWarning: divide by zero encountered in true_divide
  result = super().__array_ufunc__(function, method, *arrays, **kwargs)
[22]:
(<Figure size 432x288 with 2 Axes>,
 <WCSAxesSubplot:xlabel='Galactic Longitude', ylabel='Galactic Latitude'>,
 <matplotlib.colorbar.Colorbar at 0x7faa80c20f98>)
../_images/tutorials_analysis_3d_35_2.png

Distribution of the residual significance over the full map geometry:

[23]:
# TODO: clean this up
significance_data = residuals_significance.data

# Remove bins that are inside an exclusion region, which would create an artificial peak at significance=0.
# For now these lines are commented out, because to_image() drops the mask_fit:
# mask_data = dataset_image.mask_fit.sum_over_axes().data
# excluded = mask_data == 0
# significance_data = significance_data[~excluded]
significance_data = significance_data.flatten()

plt.hist(significance_data, density=True, alpha=0.9, color="red", bins=30)
mu, std = norm.fit(significance_data)
x = np.linspace(
    np.min(significance_data) - 1, np.max(significance_data) + 1, 50
)
p = norm.pdf(x, mu, std)
plt.plot(
    x,
    p,
    lw=2,
    color="black",
    label=r"$\mu$ = {:.2f}, $\sigma$ = {:.2f}".format(mu, std),
)
plt.legend(fontsize=17)
plt.xlim(-6, 10)
[23]:
(-6.0, 10.0)
../_images/tutorials_analysis_3d_37_1.png

Joint analysis

In this section, we perform a joint analysis of the same data. Of course, a joint fit is computationally much heavier than a stacked one, and should always be handled with care. For brevity, we only show the analysis for a point source fit, without adding a diffuse component.

Data reduction

[24]:
%%time

# Read the yaml file from disk
config_joint = AnalysisConfig.read(path=path / "config_joint.yaml")
analysis_joint = Analysis(config_joint)

# select observations:
analysis_joint.get_observations()

# run data reduction
analysis_joint.get_datasets()
Setting logging config: {'level': 'INFO', 'filename': None, 'filemode': None, 'format': None, 'datefmt': None}
Fetching observations.
Number of selected observations: 3
Creating geometry.
Creating datasets.
No background maker set for 3d analysis. Check configuration.
Processing observation 110380
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No thresholds defined for obs 110380
Processing observation 111140
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No thresholds defined for obs 111140
Processing observation 111159
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
Invalid unit found in background table! Assuming (s-1 MeV-1 sr-1)
No thresholds defined for obs 111159
CPU times: user 12.7 s, sys: 2.99 s, total: 15.7 s
Wall time: 15.7 s
[25]:
# You can see there are 3 datasets now
print(analysis_joint.datasets)
Datasets
--------

Dataset 0:

  Type       : MapDataset
  Name       : Hu6UV8Vt
  Instrument : CTA

Dataset 1:

  Type       : MapDataset
  Name       : MVvwQCr8
  Instrument : CTA

Dataset 2:

  Type       : MapDataset
  Name       : Af5rwioY
  Instrument : CTA


[26]:
# You can access each one by name or by index, e.g.:
print(analysis_joint.datasets[0])
MapDataset
----------

  Name                            : Hu6UV8Vt

  Total counts                    : 47736
  Total predicted counts          : 42655.01
  Total background counts         : 42655.01

  Exposure min                    : 6.28e+07 m2 s
  Exposure max                    : 6.68e+09 m2 s

  Number of total bins            : 1085000
  Number of fit bins              : 693940

  Fit statistic type              : cash
  Fit statistic value (-2 log(L)) : 250423.04

  Number of models                : 1
  Number of parameters            : 3
  Number of free parameters       : 1

  Component 0: BackgroundModel

    Name                      : Hu6UV8Vt-bkg
    Datasets names            : ['Hu6UV8Vt']
    Parameters:
      norm                    :   1.000
      tilt         (frozen)   :   0.000
      reference    (frozen)   :   1.000  TeV


After the data reduction stage, it is useful to get a quick summary of the datasets. Here we look at the statistics over the entire spatial map; to restrict the summary to a particular region, such as the center of the map, pass an appropriate region argument.

[27]:
analysis_joint.datasets.info_table()
[27]:
Table length=3
  name   livetime   n_on       background           excess          significance      background_rate        gamma_rate
            s                                                                               1 / s               1 / s
-------- -------- ------- ------------------ ------------------ ------------------ ------------------ ------------------
Hu6UV8Vt   1800.0 47736.0 42655.005922654644 5080.9940773453545 24.136587488499828 23.697225512585913  2.822774487414086
MVvwQCr8   1800.0 47867.0 42653.608103620325  5213.391896379671 24.752741788772607 23.696448946455735 2.8963288313220397
Af5rwioY   1800.0 47518.0  42654.52971064937  4863.470289350613 23.121111768321512 23.696960950360765 2.7019279385281183
[28]:
# Add the model on each of the datasets
model_joint = model.copy(name="source-joint")
for dataset in analysis_joint.datasets:
    dataset.models.append(model_joint)
    dataset.background_model.norm.value = 1.1
[29]:
%%time
fit_joint = Fit(analysis_joint.datasets)
result_joint = fit_joint.run()
CPU times: user 17.9 s, sys: 23.1 ms, total: 18 s
Wall time: 18 s

Fit quality assessment and model residuals for joint Datasets

We can access the results dictionary to see if the fit converged:

[30]:
print(result_joint)
OptimizeResult

        backend    : minuit
        method     : minuit
        success    : True
        message    : Optimization terminated successfully.
        nfev       : 221
        total stat : 748241.92

Check best-fit parameters and error estimates:

[31]:
result_joint.parameters.to_table()
[31]:
Table length=16
   name     value         unit          min        max     frozen   error
--------- ---------- -------------- ---------- ---------- ------ ---------
     norm  1.117e+00                 0.000e+00        nan  False 5.577e-03
     tilt  0.000e+00                       nan        nan   True 0.000e+00
reference  1.000e+00            TeV        nan        nan   True 0.000e+00
    index  2.297e+00                       nan        nan  False 7.633e-02
amplitude  2.860e-12 cm-2 s-1 TeV-1        nan        nan  False 3.096e-13
reference  1.000e+00            TeV        nan        nan   True 0.000e+00
  lambda_  2.996e-02          TeV-1        nan        nan  False 5.140e-02
    alpha  1.000e+00                       nan        nan   True 0.000e+00
    lon_0 -5.240e-02            deg        nan        nan  False 2.228e-03
    lat_0 -5.255e-02            deg -9.000e+01  9.000e+01  False 2.150e-03
     norm  1.119e+00                 0.000e+00        nan  False 5.580e-03
     tilt  0.000e+00                       nan        nan   True 0.000e+00
reference  1.000e+00            TeV        nan        nan   True 0.000e+00
     norm  1.111e+00                 0.000e+00        nan  False 5.561e-03
     tilt  0.000e+00                       nan        nan   True 0.000e+00
reference  1.000e+00            TeV        nan        nan   True 0.000e+00

The information about which parameter belongs to which dataset is not listed explicitly in the table (yet), but the order of parameters is preserved. You can always access the underlying object tree to get specific parameter values:

[32]:
for dataset in analysis_joint.datasets:
    print(dataset.background_model.norm.value)
1.1172924904475812
1.118524307524186
1.1111039182297562

Since the joint dataset is made of multiple datasets, we can either:

  • Look at the residuals for each dataset separately. In this case, we can directly refer to the section Fit quality assessment and model residuals for a MapDataset in this notebook.

  • Look at a stacked residual map.

In the latter case, we need to properly stack the joint dataset before computing the residuals:

[33]:
# TODO: clean this up

# We need to stack on the full geometry, so we use the geoms from the stacked dataset.
stacked = MapDataset.from_geoms(
    geom=dataset_stacked.counts.geom,
    geom_exposure=dataset_stacked.exposure.geom,
    geom_edisp=dataset_stacked.edisp.edisp_map.geom,
    geom_psf=dataset_stacked.psf.psf_map.geom,
)

for dataset in analysis_joint.datasets:
    # TODO: Apply mask_fit before stacking
    stacked.stack(dataset)

Then, we can access the stacked model residuals as previously shown in the section Fit quality assessment and model residuals for a MapDataset in this notebook.
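
For example (a sketch; we assume the joint best-fit model can be attached to the stacked dataset in the same way as before):

stacked.models.append(model_joint)
stacked.plot_residuals(method="diff/sqrt(model)", vmin=-1, vmax=1)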

Finally, let us compare the spectral results from the stacked and joint fit:

[34]:
def plot_spectrum(model, result, label, color):
    spec = model.spectral_model
    energy_range = [0.3, 10] * u.TeV
    spec.plot(
        energy_range=energy_range, energy_power=2, label=label, color=color
    )
    spec.plot_error(energy_range=energy_range, energy_power=2, color=color)
[35]:
plot_spectrum(model, result, label="stacked", color="tab:blue")
plot_spectrum(model_joint, result_joint, label="joint", color="tab:orange")
plt.legend()
/home/runner/work/gammapy-docs/gammapy-docs/gammapy/gammapy/modeling/models/spectral.py:323: MatplotlibDeprecationWarning: The 'nonposx' parameter of __init__() has been renamed 'nonpositive' since Matplotlib 3.3; support for the old name will be dropped two minor releases later.
  ax.set_xscale("log", nonposx="clip")
/home/runner/work/gammapy-docs/gammapy-docs/gammapy/gammapy/modeling/models/spectral.py:324: MatplotlibDeprecationWarning: The 'nonposy' parameter of __init__() has been renamed 'nonpositive' since Matplotlib 3.3; support for the old name will be dropped two minor releases later.
  ax.set_yscale("log", nonposy="clip")
/home/runner/work/gammapy-docs/gammapy-docs/gammapy/gammapy/modeling/models/spectral.py:323: MatplotlibDeprecationWarning: The 'nonposx' parameter of __init__() has been renamed 'nonpositive' since Matplotlib 3.3; support for the old name will be dropped two minor releases later.
  ax.set_xscale("log", nonposx="clip")
/home/runner/work/gammapy-docs/gammapy-docs/gammapy/gammapy/modeling/models/spectral.py:324: MatplotlibDeprecationWarning: The 'nonposy' parameter of __init__() has been renamed 'nonpositive' since Matplotlib 3.3; support for the old name will be dropped two minor releases later.
  ax.set_yscale("log", nonposy="clip")
[35]:
<matplotlib.legend.Legend at 0x7faa8118bf98>
../_images/tutorials_analysis_3d_59_2.png

Summary

Note that this notebook aims to show you the procedure of a 3D analysis using just a few observations. Results get much better for a more complete analysis that considers the GPS dataset from the CTA First Data Challenge (DC-1) together with the CTA model for the Galactic diffuse emission, as shown in the next image:

[Image: results of the complete CTA-1DC Galactic center analysis, including the diffuse emission]

The complete tutorial notebook of this analysis is available for download from the gammapy-extra repository at https://github.com/gammapy/gammapy-extra/blob/master/analyses/cta_1dc_gc_3d.ipynb.

Exercises

  • Analyse the second source in the field of view, G0.9+0.1, and add it to the combined model (a starter sketch follows after this list).

  • Perform the modelling in more detail: add a diffuse component and extract flux points.
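
As a starting point for the first exercise, here is a sketch of how a second point source could be added; the position is read off the source name (G0.9+0.1, i.e. roughly l = 0.9 deg, b = 0.1 deg), and the power-law spectral shape and its parameter values are placeholder assumptions:

from gammapy.modeling.models import PowerLawSpectralModel

model_g09 = SkyModel(
    spatial_model=PointSpatialModel(
        lon_0="0.9 deg", lat_0="0.1 deg", frame="galactic"
    ),
    spectral_model=PowerLawSpectralModel(
        index=2.4, amplitude="1e-12 cm-2 s-1 TeV-1", reference="1 TeV"
    ),
    name="g0.9",
)
dataset_stacked.models.append(model_g09)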