This is a fixed-text formatted version of a Jupyter notebook

Modeling and fitting 2D images using Gammapy

Prerequisites:

  • To understand how general modelling and fitting work in Gammapy, please refer to the analysis_3d tutorial

Context:

We often want to determine the position and morphology of an object. To do so, we don’t necessarily have to resort to a full 3D fit; a simple image fit can suffice, in particular in an energy range where the PSF does not vary strongly, or if we want to explore a possible energy dependence of the morphology.

Objective:

To localize a source and/or constrain its morphology.

Proposed approach:

The first step here, as in most analyses with DL3 data, is to create reduced datasets. For this, we will use the Analysis class to create a single set of stacked maps with a single bin in energy (thus, an image which behaves as a cube). We will then model this with a spatial model of our choice, while keeping the spectral model fixed to an integrated power law.

Setup

As usual, we’ll start with some general imports…

[1]:
%matplotlib inline
import astropy.units as u
import numpy as np
from astropy.coordinates import SkyCoord
from astropy.time import Time
from regions import CircleSkyRegion
from astropy.coordinates import Angle

import logging

log = logging.getLogger(__name__)

Now let’s import the Gammapy-specific classes and functions:

[2]:
from gammapy.analysis import Analysis, AnalysisConfig

Creating the config file

Now, we create a config for our analysis. You may load this from disk if you have a pre-defined config file.
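
For reference, a pre-defined configuration can be read from (and written back to) a YAML file. A minimal sketch; the filename here is just a placeholder:

# Hypothetical filename - adapt to your own setup
config = AnalysisConfig.read("my_2d_analysis_config.yaml")

# ... adjust settings interactively, then persist them again
config.write("my_2d_analysis_config.yaml", overwrite=True)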

Here, we use 3 simulated CTA runs of the Galactic center.

[3]:
config = AnalysisConfig()
# Selecting the observations
config.observations.datastore = "$GAMMAPY_DATA/cta-1dc/index/gps/"
config.observations.obs_ids = [110380, 111140, 111159]

Technically, Gammapy implements 2D analysis as a special case of 3D analysis (with one bin in energy). So, we must specify the type of analysis as 3D, and define the geometry of the analysis.

[4]:
config.datasets.type = "3d"
config.datasets.geom.wcs.skydir = {
    "lon": "0 deg",
    "lat": "0 deg",
    "frame": "galactic",
}  # The WCS geometry - centered on the galactic center
config.datasets.geom.wcs.fov = {"width": "10 deg", "height": "8 deg"}
config.datasets.geom.wcs.binsize = "0.02 deg"

# The FoV radius to use for cutouts
config.datasets.geom.selection.offset_max = 3.5 * u.deg

# We now fix the energy axis for the counts map - (the reconstructed energy binning)
config.datasets.geom.axes.energy.min = "0.1 TeV"
config.datasets.geom.axes.energy.max = "10 TeV"
config.datasets.geom.axes.energy.nbins = 1

config.datasets.geom.wcs.binsize_irf = 0.2 * u.deg
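
For orientation, the map geometry this configuration describes corresponds roughly to one built directly with the gammapy.maps API. A sketch only, not needed for the tutorial itself; it assumes the frame keyword and the MapAxis.from_energy_bounds helper of recent Gammapy versions (older versions used coordsys instead of frame):

from gammapy.maps import MapAxis, WcsGeom

# A single reconstructed-energy bin from 0.1 to 10 TeV
energy_axis = MapAxis.from_energy_bounds("0.1 TeV", "10 TeV", nbin=1)

geom = WcsGeom.create(
    skydir=(0, 0),  # Galactic center
    frame="galactic",
    width=(10, 8),  # deg
    binsz=0.02,  # deg
    axes=[energy_axis],
)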
[5]:
print(config)
AnalysisConfig

    general:
        log: {level: info, filename: null, filemode: null, format: null, datefmt: null}
        outdir: .
    observations:
        datastore: $GAMMAPY_DATA/cta-1dc/index/gps
        obs_ids: [110380, 111140, 111159]
        obs_file: null
        obs_cone: {frame: null, lon: null, lat: null, radius: null}
        obs_time: {start: null, stop: null}
    datasets:
        type: 3d
        stack: true
        geom:
            wcs:
                skydir: {frame: galactic, lon: 0.0 deg, lat: 0.0 deg}
                binsize: 0.02 deg
                fov: {width: 10.0 deg, height: 8.0 deg}
                binsize_irf: 0.2 deg
            selection: {offset_max: 3.5 deg}
            axes:
                energy: {min: 0.1 TeV, max: 10.0 TeV, nbins: 1}
                energy_true: {min: 0.1 TeV, max: 10.0 TeV, nbins: 30}
        map_selection: [counts, exposure, background, psf, edisp]
        background:
            method: null
            exclusion: null
            parameters: {}
        safe_mask:
            methods: [aeff-default]
            parameters: {}
        on_region: {frame: null, lon: null, lat: null, radius: null}
        containment_correction: true
    fit:
        fit_range: {min: 0.1 TeV, max: 10.0 TeV}
    flux_points:
        energy: {min: 0.1 TeV, max: 10.0 TeV, nbins: 30}
        source: source
        parameters: {}

Getting the reduced dataset

We now use the config file and create a single MapDataset containing counts, background, exposure, psf and edisp maps.

[6]:
%%time
analysis = Analysis(config)
analysis.get_observations()
analysis.get_datasets()
Setting logging config: {'level': 'INFO', 'filename': None, 'filemode': None, 'format': None, 'datefmt': None}
Fetching observations.
Number of selected observations: 3
Creating geometry.
Creating datasets.
No background maker set for 3d analysis. Check configuration.
Processing observation 110380
No thresholds defined for obs Observation

        obs id            : 110380
        tstart            : 59235.50
        tstop             : 59235.52
        duration          : 1800.00 s
        pointing (icrs)   : 267.7 deg, -29.6 deg

        deadtime fraction : 2.0%

Processing observation 111140
No thresholds defined for obs Observation

        obs id            : 111140
        tstart            : 59275.50
        tstop             : 59275.52
        duration          : 1800.00 s
        pointing (icrs)   : 264.2 deg, -29.5 deg

        deadtime fraction : 2.0%

Processing observation 111159
No thresholds defined for obs Observation

        obs id            : 111159
        tstart            : 59276.50
        tstop             : 59276.52
        duration          : 1800.00 s
        pointing (icrs)   : 266.0 deg, -27.0 deg

        deadtime fraction : 2.0%

CPU times: user 9.32 s, sys: 3.22 s, total: 12.5 s
Wall time: 12.8 s
[7]:
print(analysis.datasets["stacked"])
MapDataset
----------

  Name                            : stacked

  Total counts                    : 143121
  Total predicted counts          : 125042.02
  Total background counts         : 125042.02

  Exposure min                    : 0.00e+00 m2 s
  Exposure max                    : 1.87e+10 m2 s

  Number of total bins            : 200000
  Number of fit bins              : 186500

  Fit statistic type              : cash
  Fit statistic value (-2 log(L)) : 294443.11

  Number of models                : 1
  Number of parameters            : 3
  Number of free parameters       : 1

  Component 0: BackgroundModel

    Name                      : stacked-bkg
    Datasets names            : ['stacked']
    Parameters:
      norm                    :   1.000
      tilt         (frozen)   :   0.000
      reference    (frozen)   :   1.000  TeV


The counts and background maps have only one bin in reconstructed energy. The exposure and IRF maps are in true energy, and hence have a different binning, based upon the binning of the IRFs. We do not need to worry about them for now.

[8]:
print(analysis.datasets["stacked"].counts)
WcsNDMap

        geom  : WcsGeom
        axes  : ['lon', 'lat', 'energy']
        shape : (500, 400, 1)
        ndim  : 3
        unit  :
        dtype : float32

[9]:
print(analysis.datasets["stacked"].background_model.map)
WcsNDMap

        geom  : WcsGeom
        axes  : ['lon', 'lat', 'energy']
        shape : (500, 400, 1)
        ndim  : 3
        unit  :
        dtype : float64

[10]:
print(analysis.datasets["stacked"].exposure)
WcsNDMap

        geom  : WcsGeom
        axes  : ['lon', 'lat', 'energy_true']
        shape : (500, 400, 30)
        ndim  : 3
        unit  : m2 s
        dtype : float32
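
To check this binning explicitly, one can inspect the non-spatial axis of each map geometry. A quick sketch, using list-style axis access:

stacked = analysis.datasets["stacked"]

# Reconstructed-energy axis of the counts map: a single bin
print(stacked.counts.geom.axes[0].edges)

# True-energy axis of the exposure map: 30 bins
print(stacked.exposure.geom.axes[0].edges)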

Modelling

Now, we define a model to be fitted to the dataset. The important thing to note here is the dummy spectral model - an integrated power law (PowerLaw2SpectralModel) with the normalisation as the only free parameter. Here, we use its YAML definition to load it:

[11]:
model_config = """
components:
- name: GC-1
  type: SkyModel
  spatial:
    type: PointSpatialModel
    frame: galactic
    parameters:
    - name: lon_0
      value: 0.02
      unit: deg
    - name: lat_0
      value: 0.01
      unit: deg
  spectral:
    type: PowerLaw2SpectralModel
    parameters:
    - name: amplitude
      value: 1.0e-12
      unit: cm-2 s-1
    - name: index
      value: 2.0
      unit: ''
      frozen: true
    - name: emin
      value: 0.1
      unit: TeV
      frozen: true
    - name: emax
      value: 10.0
      unit: TeV
      frozen: true
"""
[12]:
analysis.set_models(model_config)
Reading model.
Models

Component 0: SkyModel

  Name                      : GC-1
  Datasets names            : None
  Spectral model type       : PowerLaw2SpectralModel
  Spatial  model type       : PointSpatialModel
  Temporal model type       : None
  Parameters:
    amplitude               :   1.00e-12  1 / (cm2 s)
    index        (frozen)   :   2.000
    emin         (frozen)   :   0.100  TeV
    emax         (frozen)   :  10.000  TeV
    lon_0                   :   0.020  deg
    lat_0                   :   0.010  deg


We will freeze the parameters of the background model:

[13]:
analysis.datasets["stacked"].background_model.parameters["norm"].frozen = True
analysis.datasets["stacked"].background_model.parameters["tilt"].frozen = True
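
Alternatively, all background parameters can be frozen in one call; this assumes the freeze_all() helper of the Parameters class is available in your Gammapy version:

# Equivalent shortcut: freeze every background parameter at once
analysis.datasets["stacked"].background_model.parameters.freeze_all()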
[14]:
# To run the fit
analysis.run_fit()
Fitting datasets.
OptimizeResult

        backend    : minuit
        method     : minuit
        success    : False
        message    : Optimization failed.
        nfev       : 24
        total stat : 0.00

[15]:
# To see the best fit values along with the errors
analysis.fit_result.parameters.to_table()
[15]:
Table length=9
   name     value      unit      min      max   frozen   error
   str9    float64     str8    float64  float64   bool  float64
--------- ---------- -------- --------- ------- ------ ---------
     norm  1.000e+00          0.000e+00     nan   True 0.000e+00
     tilt  0.000e+00                nan     nan   True 0.000e+00
reference  1.000e+00      TeV       nan     nan   True 0.000e+00
amplitude  1.000e-12 cm-2 s-1       nan     nan  False       nan
    index  2.000e+00                nan     nan   True 0.000e+00
     emin  1.000e-01      TeV       nan     nan   True 0.000e+00
     emax  1.000e+01      TeV       nan     nan   True 0.000e+00
    lon_0  2.000e-02      deg       nan     nan  False       nan
    lat_0  1.000e-02      deg       nan     nan  False       nan
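
From here, the best-fit source position can be read off the model directly. A small sketch; it assumes GC-1 is the first (and only) sky model attached to the stacked dataset:

# Best-fit position of the point source as a SkyCoord
model = analysis.datasets["stacked"].models[0]
print(model.spatial_model.position)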