This is a fixed-text formatted version of a Jupyter notebook

Joint 3D Analysis

In this tutorial we show how to run a joint 3D map-based analysis using three example observations of the Galactic center region with CTA. We start with the required imports:

[1]:
%matplotlib inline
import numpy as np
[2]:
from pathlib import Path
from astropy import units as u
from astropy.coordinates import SkyCoord
from regions import CircleSkyRegion
[3]:
from gammapy.data import DataStore
from gammapy.irf import EnergyDispersion, make_psf
from gammapy.maps import WcsGeom, MapAxis, Map
from gammapy.cube import MapMaker, PSFKernel, MapDataset
from gammapy.modeling.models import (
    SkyModel,
    BackgroundModel,
    PowerLawSpectralModel,
    PointSpatialModel,
)
from gammapy.modeling import Fit

Prepare modeling input data

We first use the DataStore object to access the CTA observations and retrieve a list of observations by passing the observation IDs to the .get_observations() method:

[4]:
# Define which data to use and print some information
data_store = DataStore.from_dir("$GAMMAPY_DATA/cta-1dc/index/gps/")
[5]:
# Select some observations from this dataset by hand
obs_ids = [110380, 111140, 111159]
observations = data_store.get_observations(obs_ids)
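
As a quick check, we can print the ID and pointing position of each selected observation; both attributes are used again below:

[ ]:
# Print a short summary of the selected observations
for obs in observations:
    print(obs.obs_id, obs.pointing_radec)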

Prepare input maps

Now we define a reference geometry for our analysis. We choose a WCS-based geometry with a bin size of 0.02 deg and also define an energy axis:

[6]:
energy_axis = MapAxis.from_edges(
    np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
    skydir=(0, 0),
    binsz=0.02,
    width=(10, 8),
    coordsys="GAL",
    proj="CAR",
    axes=[energy_axis],
)
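
To verify the setup, we can print the geometry and the energy axis, which both have informative string representations:

[ ]:
# Inspect the reference geometry and the energy binning
print(geom)
print(energy_axis)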

In addition, we define the center coordinate and the FoV offset cut:

[7]:
# Source position
src_pos = SkyCoord(0, 0, unit="deg", frame="galactic")

# FoV max
offset_max = 4 * u.deg

The maps are prepared by calling the MapMaker.run() method and passing the observations. The .run() method returns a Python dict containing a counts, a background and an exposure map. For the joint analysis, we compute the maps per observation and store the result in the observations_data dictionary.

[8]:
%%time
observations_data = {}

for obs in observations:
    # For each observation, the map will be centered on the pointing position.
    geom_cutout = geom.cutout(
        position=obs.pointing_radec, width=2 * offset_max
    )
    maker = MapMaker(geom_cutout, offset_max=offset_max)
    maps = maker.run([obs])
    observations_data[obs.obs_id] = maps
WARNING: Tried to get polar motions for times after IERS data is valid. Defaulting to polar motion from the 50-yr mean for those. This may affect precision at the 10s of arcsec level [astropy.coordinates.builtin_frames.utils]
/Users/adonath/github/adonath/gammapy/gammapy/utils/interpolation.py:159: Warning: Interpolated values reached float32 precision limit
  "Interpolated values reached float32 precision limit", Warning
CPU times: user 13.3 s, sys: 2.46 s, total: 15.8 s
Wall time: 15.8 s
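
For a quick look at the result, we can sum the counts cube of the first observation over the energy axis and plot it, reusing the plotting pattern from the end of this notebook:

[ ]:
# Quick-look image: energy-summed, slightly smoothed counts map
# of the first observation
counts_image = observations_data[obs_ids[0]]["counts"]
counts_image.sum_over_axes().smooth("0.05 deg").plot(add_cbar=True);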

Prepare IRFs

The PSF and energy dispersion (Edisp) are estimated for each observation at the source position defined by src_pos:

[9]:
# define energy grid for edisp
energy = energy_axis.edges
[10]:
for obs in observations:
    table_psf = make_psf(obs, src_pos)
    psf = PSFKernel.from_table_psf(table_psf, geom, max_radius="0.5 deg")
    observations_data[obs.obs_id]["psf"] = psf

    # create Edisp
    offset = src_pos.separation(obs.pointing_radec)
    edisp = obs.edisp.to_energy_dispersion(
        offset, e_true=energy, e_reco=energy
    )
    observations_data[obs.obs_id]["edisp"] = edisp

Save maps as well as IRFs to disk:

[11]:
for obs_id in obs_ids:
    path = Path("analysis_3d_joint") / "obs_{}".format(obs_id)
    path.mkdir(parents=True, exist_ok=True)

    for key in ["counts", "exposure", "background", "edisp", "psf"]:
        filename = "{}.fits.gz".format(key)
        observations_data[obs_id][key].write(path / filename, overwrite=True)
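
As a sanity check, we can list the files just written for the first observation:

[ ]:
# List the FITS files written for the first observation
path = Path("analysis_3d_joint") / "obs_{}".format(obs_ids[0])
print(sorted(path.glob("*.fits.gz")))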

Likelihood fit

Reading maps and IRFs

As a first step, we define a source model:

[12]:
spatial_model = PointSpatialModel(
    lon_0="-0.05 deg", lat_0="-0.05 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
    index=2.4, amplitude="2.7e-12 cm-2 s-1 TeV-1", reference="1 TeV"
)
model = SkyModel(spatial_model=spatial_model, spectral_model=spectral_model)
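
Optionally, we can print the model to check the starting parameter values. Evaluating the spectral model at its reference energy of 1 TeV should return the amplitude (spectral models can be called on an energy):

[ ]:
# Print the model parameters and evaluate the power law at 1 TeV
print(model)
print(spectral_model(1 * u.TeV))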

Now we read the maps and IRFs and create the dataset for each observation:

[13]:
datasets = []

for obs_id in obs_ids:
    path = Path("analysis_3d_joint") / "obs_{}".format(obs_id)

    # read counts map and IRFs
    counts = Map.read(path / "counts.fits.gz")
    exposure = Map.read(path / "exposure.fits.gz")

    psf = PSFKernel.read(path / "psf.fits.gz")
    edisp = EnergyDispersion.read(path / "edisp.fits.gz")

    # create background model per observation / dataset
    background = Map.read(path / "background.fits.gz")
    background_model = BackgroundModel(background)
    background_model.tilt.frozen = False
    background_model.norm.value = 1.3

    # optionally define a safe energy threshold; with emin=None no cut is applied
    emin = None
    mask = counts.geom.energy_mask(emin=emin)

    dataset = MapDataset(
        model=model,
        counts=counts,
        exposure=exposure,
        psf=psf,
        edisp=edisp,
        background_model=background_model,
        mask_fit=mask,
    )

    datasets.append(dataset)
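
Before fitting, a quick consistency check: the total number of counts in each dataset:

[ ]:
# Total counts per dataset, keyed by observation ID
for obs_id, dataset in zip(obs_ids, datasets):
    print(obs_id, int(dataset.counts.data.sum()))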
[14]:
fit = Fit(datasets)
[15]:
%%time
result = fit.run()
CPU times: user 25.9 s, sys: 2.08 s, total: 28 s
Wall time: 28 s
[16]:
print(result)
OptimizeResult

        backend    : minuit
        method     : minuit
        success    : True
        message    : Optimization terminated successfully.
        nfev       : 412
        total stat : 922625.68

Best fit parameters:

[17]:
fit.datasets.parameters.to_table()
[17]:
Table length=14
   name     value     error       unit        min        max     frozen
   str9    float64   float64     str14      float64    float64    bool
--------- ---------- ------- -------------- ---------- ---------- ------
    lon_0 -5.249e-02     nan            deg        nan        nan  False
    lat_0 -5.205e-02     nan            deg -9.000e+01  9.000e+01  False
    index  2.366e+00     nan                       nan        nan  False
amplitude  2.667e-12     nan cm-2 s-1 TeV-1        nan        nan  False
reference  1.000e+00     nan            TeV        nan        nan   True
     norm  1.311e+00     nan                 0.000e+00        nan  False
     tilt -1.091e-01     nan                       nan        nan  False
reference  1.000e+00     nan            TeV        nan        nan   True
     norm  1.310e+00     nan                 0.000e+00        nan  False
     tilt -1.064e-01     nan                       nan        nan  False
reference  1.000e+00     nan            TeV        nan        nan   True
     norm  1.324e+00     nan                 0.000e+00        nan  False
     tilt -1.186e-01     nan                       nan        nan  False
reference  1.000e+00     nan            TeV        nan        nan   True

The information about which parameter belongs to which dataset is not listed explicitly in the table (yet), but the order of the parameters is preserved. You can always access the underlying object tree as well to get specific parameter values:

[18]:
for dataset in datasets:
    print(dataset.background_model.norm.value)
1.310714801107288
1.309598941469269
1.324403094628361
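
In the same way, we can read off the fitted tilt of each background model:

[ ]:
# Fitted tilt of the background model for each dataset
for dataset in datasets:
    print(dataset.background_model.tilt.value)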

Plotting residuals

Each MapDataset object is equipped with a method called plot_residuals(), which displays the spatial and spectral residuals (computed as counts - model) for the dataset. Optionally, these can be normalized as (counts - model)/model or (counts - model)/sqrt(model), by passing method="diff/model" or method="diff/sqrt(model)".

First of all, let’s define a region for the spectral extraction:

[19]:
region = CircleSkyRegion(spatial_model.position, radius=0.15 * u.deg)

We can now inspect the residuals for each dataset, separately:

[20]:
ax_image, ax_spec = datasets[0].plot_residuals(
    region=region, vmin=-0.5, vmax=0.5, method="diff"
)
../_images/notebooks_analysis_3d_joint_34_0.png
[21]:
datasets[1].plot_residuals(region=region, vmin=-0.5, vmax=0.5);
../_images/notebooks_analysis_3d_joint_35_0.png
[22]:
datasets[2].plot_residuals(region=region, vmin=-0.5, vmax=0.5);
../_images/notebooks_analysis_3d_joint_36_0.png
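
For comparison, a normalized residual plot can be produced with one of the options described above (a sketch; the option names here are assumed to match those of MapDataset.residuals()):

[ ]:
# Residuals of the first dataset, normalized as
# (counts - model) / sqrt(model)
datasets[0].plot_residuals(
    region=region, method="diff/sqrt(model)"
);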

Finally, we can compute a stacked residual map:

[23]:
residuals_stacked = Map.from_geom(geom)

for dataset in datasets:
    residuals = dataset.residuals()
    coords = residuals.geom.get_coord()

    # add the residuals of each dataset into a map defined on the
    # full reference geometry
    residuals_stacked.fill_by_coord(coords, residuals.data)
[24]:
residuals_stacked.sum_over_axes().smooth("0.1 deg").plot(
    vmin=-1, vmax=1, cmap="coolwarm", add_cbar=True, stretch="linear"
);
../_images/notebooks_analysis_3d_joint_39_0.png